Releases



  • Version 1.6 [CURRENT]
    • Availability of a new component for building conversational experiences via the DialogFlow API: Dialog
    • Several improvements to the ActivityTracker component for storing data on a Learning Record Store (Learning Locker)
    • Several improvements to the SemanticConcept and ConceptExplorer components for querying any RDF endpoint
    • Availability of an xAPI block category including the xAPI verb catalog
    • Availability of a new component for executing BPMN workflows: Workflow

  • Version 1.5
    • Availability of three new components for viewing 3D objects, 360º images and 360º videos (both local and streamed): VR3DObject, VRImage360 and VRVideo360
    • Availability of a new component for managing a Bluetooth remote control, allowing interaction with a virtual reality environment: VRController
    • Availability of a new component to set up the options of the virtual reality scene: VRScene
    • Availability of a Streams block category for live data analysis on the mobile device
    • Several improvements to the ActivityTracker, ActivitySimpleQuery, and ActivityAggregationQuery components: remote streaming analysis with Apache Flink, notifications to a message queue (Apache Kafka), and a new NoSQL storage backend (MongoDB)
    • Several improvements to the BrainwaveSensor component, incorporating new events to collect data from the sensors

  • Version 1.4
    • Availability of a new component for managing devices for brain-computer interfaces, such as the Emotiv Epoc+ and Emotiv Insight: BrainwaveSensor
    • Availability of a new component for managing Myo, a wearable gesture control device: ArmbandGestureSensor
    • Availability of two new components for automatically retrieving information from Wikidata (the central structured data storage for Wikipedia): ConceptExplorer and SemanticConcept
    • Availability of a new component for rendering 3D models: Model3DViewer
    • Several improvements to the component for managing Sphero robots

  • Version 1.3
    • Availability of a new component for managing Sphero robots: SpheroController
    • Availability of new specific components for querying and representing analytical data: ActivitySimpleQuery, ActivityAggregationQuery, Chart and DataTable

  • Version 1.2
    • Availability of a new component for managing gestures: HandGestureSensor
    • Availability of new components for character recognition in augmented reality scenarios and for overlaying an action bar: ARTextTracker and ARCameraOverLayer
    • Availability of a new component for messaging: GoogleCloudMessaging
    • Availability of a new component for communicating with an Internet of Things service: ThingSpeakLocationSensor
    • Availability of a new component for retrieving information about the user's device: DeviceInfo
    • Several improvements to the component for tracking analytical data

  • Version 1.1
    • Availability of a component for tracking analytical data: ActivityTracker
    • Availability of a new component for augmented reality: ARImageAsset
    • Several improvements to the existing components for augmented reality

  • Version 1.0
    • Availability of specific components for augmented reality: ARCamera, ARMarkerTracker, ARObjectTracker, AR3DModelAsset, AR3DImageAsset

Getting Started


To work with VEDILS you only need a web browser (we recommend Google Chrome) to access the website. VEDILS is composed of two parts: one for the app design [Figure 1], where we drag elements from the toolbox onto the screen (these can be configured later in the settings tab), and another, called Blocks [Figure 2], where we define the logic behind the app using a visual language.


Once we have developed our app, we must generate the .apk file in order to install it on our Android mobile devices. To do so [Figure 3], we first need to click on the option “Build - App (save .apk to my computer)”. Once we have copied the .apk file to our mobile device, we only need to open the file to install the app. Don’t forget that our device must be configured to allow the installation of third-party applications. For further information you can watch the following video.




The appearance of our apps in the design view may not match their display on mobile devices. To see in real time what our app looks like, how virtual objects are displayed, etc., we can use the VEDILS Companion app.


Steps to test our VEDILS apps
  • First we must install VEDILS Companion on our mobile device, which must be connected (through WiFi) to the same network as the computer we are developing on.
  • Tap “Connect - AI Companion” on the VEDILS website. A new window will appear with a QR code.
  • Open the VEDILS Companion app on our mobile device, tap the “Scan QR Code” option and point the camera at the QR code provided by VEDILS.

VEDILS COMPONENTS (extensions to MIT App Inventor)


VEDILS Augmented reality (+Info)
  • AR3DModelAsset: It allows showing 3D models in .OBJ, .3DS and .MD2 formats.
  • ARImageAsset: It shows images in 2D format.
  • ARCamera: It opens the camera and displays the augmented reality.
  • ARCameraOverLayer: It overlays information and buttons on the real-time image captured by the camera.
  • ARTextTracker: Text recognition.
  • ARMarkerTracker: Recognition of up to 512 augmented reality markers. A PDF file with custom markers is available here.
  • ARObjectTracker: Image recognition.

VEDILS Virtual reality (+Info)
  • VR3DObject: It allows viewing 3D objects in .OBJ, .3DS and .MD2 formats.
  • VRController: It sets up a Bluetooth remote control, allowing interaction with a virtual reality environment.
  • VRImage360: It allows viewing 360º images.
  • VRScene: It sets up the options of the virtual reality scene.
  • VRVideo360: It shows 360º videos, both local and streamed.

VEDILS Learning Analytics (+Info)
  • Block category "Streams", incorporating functions to work with data streams: filter, map, reduce, sort and limit (see the sketch after this list).
  • ActivityAggregationQuery: It issues aggregation queries to the Fusion Tables and MongoDB services in order to retrieve metrics, and can also send streaming queries to Apache Flink.
  • ActivitySimpleQuery: It issues queries to select data (Fusion Tables, MongoDB and streaming queries with Apache Flink).
  • ActivityTracker: It records the user’s interactions with the app on Google Fusion Tables or MongoDB, and can also send the data to a message queue (Apache Kafka).
  • Chart: It displays a (bar/line) chart with data.
  • DataTable: It displays a table with data.
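
The Streams blocks mirror the classic stream pipeline found in general-purpose languages. As a rough analogy only (VEDILS apps are programmed with visual blocks, not Java), here is a minimal Java sketch of the same filter/map/reduce/sort/limit pipeline over some hypothetical interaction durations:

    import java.util.Arrays;
    import java.util.List;

    public class StreamsAnalogy {
        public static void main(String[] args) {
            // Hypothetical interaction durations (seconds) collected by ActivityTracker.
            List<Integer> durations = Arrays.asList(12, 87, 3, 45, 60, 9, 120);

            // filter -> map -> sort -> limit -> reduce, mirroring the Streams blocks.
            int total = durations.stream()
                    .filter(d -> d >= 10)        // keep interactions of 10 s or more
                    .map(d -> (d + 59) / 60)     // seconds to minutes, rounded up
                    .sorted()                    // sort ascending
                    .limit(5)                    // keep at most five values
                    .reduce(0, Integer::sum);    // reduce by summing

            System.out.println("Aggregated minutes: " + total);
        }
    }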

VEDILS Interactions
  • BrainwaveSensor: It captures brain activity patterns when an Emotiv device is connected (see tutorial).
  • ArmbandGestureSensor: It captures the user’s arm gestures when a Myo device is connected (see tutorial).
  • HandGestureSensor: It captures the user’s hand gestures when a Leap Motion device is connected (see tutorial). To use this device you must install the SDKLeapMotion.
  • Dialog: It recognizes voice commands and triggers the appropriate actions according to a dialog defined with DialogFlow (see the sketch below).
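
Under the hood, the Dialog component relies on Dialogflow’s detectIntent operation. The following minimal Java (11+) sketch shows what such a call looks like at the REST level; the project id, session id and the environment variable holding the OAuth2 access token are hypothetical, and the component hides all of this behind blocks:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DialogflowSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical Dialogflow project id and access token (env variable).
            String project = "my-agent-project";
            String token = System.getenv("DIALOGFLOW_TOKEN");

            // detectIntent request body: a text query in English.
            String body = "{\"queryInput\":{\"text\":"
                    + "{\"text\":\"show the 3D model\",\"languageCode\":\"en\"}}}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://dialogflow.googleapis.com/v2/projects/"
                            + project + "/agent/sessions/session-1:detectIntent"))
                    .header("Authorization", "Bearer " + token)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            // The JSON response carries the matched intent and its fulfillment text.
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }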

VEDILS Robotics (+Info)
  • SpheroController: It remotely controls Sphero devices and receives events such as collisions.

VEDILS Knowledge
  • ConceptExplorer: It automatically extracts data entities from Wikipedia articles (see tutorial).
  • SemanticConcept: It retrieves data about specific concepts, from general properties (id, description, image, etc.) to domain-specific properties (see the query sketch below).
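
Both components are built on queries against Wikidata (and, since version 1.6, any RDF endpoint). As an illustration of the kind of raw SPARQL request they wrap, this minimal Java (11+) sketch fetches the English label and description of the Wikidata entity Q1; the query is an example, not the exact one the components generate:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class WikidataSketch {
        public static void main(String[] args) throws Exception {
            // English label and description of entity Q1 ("universe").
            String sparql = "SELECT ?label ?desc WHERE { "
                    + "wd:Q1 rdfs:label ?label ; schema:description ?desc . "
                    + "FILTER(LANG(?label) = \"en\" && LANG(?desc) = \"en\") }";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://query.wikidata.org/sparql?query="
                            + URLEncoder.encode(sparql, StandardCharsets.UTF_8)))
                    .header("Accept", "application/sparql-results+json")
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());  // JSON bindings for label/desc
        }
    }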

VEDILS Communications
  • DeviceInfo: It obtains information from the device in order to identify it.
  • GoogleCloudMessaging: It sends messages among different mobile devices.
  • ThingSpeakLocationSensor: An experimental integration with ThingSpeak to retrieve and show the device’s location (see the REST sketch below).
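
ThingSpeak itself exposes a plain REST API: a channel’s feed can be read as JSON with a single GET request. A minimal Java (11+) sketch, using a hypothetical channel id (private channels additionally require an api_key parameter):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ThingSpeakSketch {
        public static void main(String[] args) throws Exception {
            String channelId = "123456";  // hypothetical channel id

            // Fetch the two most recent feed entries (e.g. location fields) as JSON.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.thingspeak.com/channels/"
                            + channelId + "/feeds.json?results=2"))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }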

Experimental
  • Model3DViewer: It allows rendering 3D models in the app (see tutorial).

Improved Components
  • SpeechRecognizer: An improved version of the standard component that allows changing the language used for speech recognition and dictation (e.g. de, en, es, fr, it).

PUBLICATIONS



[Journal papers]

  • Mota, J. M., Ruiz-Rube, I., Dodero, J. M., & Arnedillo-Sánchez, I. (2018). Augmented reality mobile app development for all. Computers & Electrical Engineering, 65, 250-260.
  • Rodríguez-Corral, J. M., Ruiz-Rube, I., Civit-Balcells, A., Mota, J. M., Morgado-Estévez, A., & Dodero, J. M. A Study on the Suitability of Visual Languages for Robot Programming. (To be published).

[Book chapters]

  • Mota, J. M., Ruiz-Rube, I., Dodero, J. M., Person, T., & Arnedillo-Sánchez, I. (2018). Learning Analytics in Mobile Applications Based on Multimodal Interaction. In Software Data Engineering for Network eLearning Environments (pp. 67-92). Springer, Cham.

[Papers in conference proceedings]

  • Berns, A., Mota, J. M., Ruiz-Rube, I., & Dodero, J. M. (2018, October). Exploring the potential of a 360º video application for foreign language learning. In Proceedings of the 6th International Conference on Technological Ecosystems for Enhancing Multiculturality. ACM.
  • Person, T., Mota, J. M., Listán, M. C., Ruiz-Rube, I., Dodero, J. M., Rambla, F., Muriel, C., Ruiz, A., & Vidal, J. M. (2018, October). Authoring of educational mobile apps for the mathematics-learning analysis. In Proceedings of the 6th International Conference on Technological Ecosystems for Enhancing Multiculturality. ACM.
  • Person, T., Ruiz-Rube, I., & Dodero, J. M. (2018, May). Exploiting the Web of Data for the creation of mobile apps by non-expert programmers. In Proceedings of the International Workshop on Learning and Education with Web Data. ACM Conference on Web Science.
  • Dodero, J. M., Mota, J. M., & Ruiz-Rube, I. (2017, October). Bringing computational thinking to teachers' training: a workshop review. In Proceedings of the 5th International Conference on Technological Ecosystems for Enhancing Multiculturality (p. 4). ACM.
  • Balderas, A., Ruiz-Rube, I., Mota, J. M., Dodero, J. M., & Palomo-Duarte, M. (2016, November). A development environment to customize assessment through students interaction with multimodal applications. In Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 1043-1048). ACM.
  • Mota, J. M., Ruiz-Rube, I., Dodero, J. M., & Figueiredo, M. (2016). Visual Environment for Designing Interactive Learning Scenarios with Augmented Reality. In Proceedings of the 12th International Conference on Mobile Learning 2016.
  • Ruiz-Rube, I., Mota, J. M., Person, T., Berns, A., & Dodero, J. M. (2016, September). Autoría y analítica de aplicaciones móviles educativas multimodales. In Proceedings of SIIE 2016, Simposio Internacional de Informática Educativa.
  • Ruiz-Rube, I., Mota, J. M., & Dodero, J. M. (2016). Diseño de escenarios de aprendizaje interactivos para su despliegue sobre dispositivos móviles. In Libro de actas del XIII Foro Internacional sobre Evaluación de la Calidad de la Investigación y de la Educación Superior.

[Posters and other communications]

  • Person, T., Ruiz-Rube, I., & Dodero, J. M. (2018). Towards a Methodology and a Toolkit to Analyse Data for Novices in Computer Programming. Learning Analytics Summer Institute Spain.
  • Mota, J. M., & Ruiz-Rube, I. (2017). VEDILS: a toolkit for developing Android mobile apps supporting mobile analytics. Seventh European Business Intelligence & Big Data Summer School (eBISS 2017).
  • Ruiz-Rube, I., & Mota, J. M. (2017). A BI platform for analysing mobile app development process based on visual languages. Seventh European Business Intelligence & Big Data Summer School (eBISS 2017).
  • Person, T., Ruiz-Rube, I., & Sibón, T. (2017). Aportación de la Ingeniería Informática en la creación del APP “A manos llenas”. Cuentos accesibles como recurso didáctico. Congreso Internacional Sobre Escritura y Sordera.
  • Mota, J. M., Ruiz-Rube, I., & Dodero, J. M. (2017). Desarrollo sencillo de apps para la enseñanza y aprendizaje de la lengua de signos. Congreso Internacional Sobre Escritura y Sordera.