Development 1
Concept: Inspired by Google’s Paper Signals, this first design features a voice-activated intermediary device. Here, I explored the concept of physicalising control purely through the voice. Because of how the voice module works, the approach is DIY: users record their own voice and set the different commands (through codes) in relation to the privacy choice for each command. Since this preset is done before the product can visualise data loss, users gain more understanding of how it works.
Process: Each user must calibrate the device by recording their voice for the different commands they want each bubble gun to react to. The idea is that whenever the user (after calibration) says that command to Alexa, it also triggers the bubble gun, acting as a reminder of that particular data loss. The amount of liquid placed in each cup determines how much of that data users are willing to let go of.
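As a rough sketch of how this calibrate-then-trigger flow could be wired, the fragment below assumes a voice module compatible with the Elechouse VoiceRecognitionV3 library (with the voice record trained beforehand, as in the calibration step) and a servo that squeezes the bubble gun’s trigger. The pin numbers and record index are hypothetical.

```cpp
// Minimal sketch: a calibrated voice command triggers one bubble gun.
// Assumes a module compatible with the Elechouse VoiceRecognitionV3
// library; all pins and the record number are assumed wiring.
#include <SoftwareSerial.h>
#include <Servo.h>
#include "VoiceRecognitionV3.h"

VR myVR(2, 3);        // voice module on RX pin 2, TX pin 3 (assumed)
Servo bubbleTrigger;  // servo that pulls the bubble gun's trigger

uint8_t buf[64];

void setup() {
  myVR.begin(9600);
  bubbleTrigger.attach(9);   // servo signal pin (assumed)
  bubbleTrigger.write(0);    // resting position
  myVR.load((uint8_t)0);     // load record 0, trained during calibration
}

void loop() {
  // recognize() returns >0 when a loaded voice record is heard
  int ret = myVR.recognize(buf, 50);
  if (ret > 0 && buf[1] == 0) { // record 0 = the calibrated command
    bubbleTrigger.write(90);    // squeeze the trigger: blow bubbles as the reminder
    delay(1000);
    bubbleTrigger.write(0);     // release
  }
}
```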
Moving forward, I think this approach doesn’t add any knowledge about the data collection process; it merely serves as a reminder for users. There is a clear improvement to make: to depict the underlying data that is collected from users when they command Alexa to do something.
Which data needs to be used?
Referring to the previous Alexa system analysis, I would like to show users the underlying data that is used to fulfil commands to Alexa. Due to the large scope of skills Alexa can cover, I will refer to the level of personalisation (refer to the roleplay experiment) to generate data (in the form of keywords) for the intermediary device to detect when it hears a command. I think this provides a good explanation of the device’s functions. The underlying data list is based on the API information needed to generate an answer to a particular keyword, as stated on the Amazon developer site.
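To make the keyword-to-data idea concrete, here is a minimal sketch of the lookup table the intermediary device could hold. The keywords and data items shown are illustrative placeholders only; the actual list would be drawn from the API information on the Amazon developer site, as noted above.

```cpp
// Sketch of a keyword-to-underlying-data table. Entries are
// illustrative placeholders; the real list would come from the
// Amazon developer documentation.
#include <string.h>

struct KeywordData {
  const char* keyword;            // word the device listens for
  const char* underlyingData[3];  // data Alexa needs to fulfil it
};

const KeywordData dataList[] = {
  { "weather", { "device location",  "time zone",  "voice recording" } },
  { "play",    { "listening history", "account ID", "voice recording" } },
  { "call",    { "contact list",      "account ID", "voice recording" } },
};

// Print the data behind a detected keyword
void lookupKeyword(const char* heard) {
  for (const KeywordData& entry : dataList) {
    if (strcmp(entry.keyword, heard) == 0) {
      for (const char* item : entry.underlyingData) {
        Serial.println(item);
      }
    }
  }
}

void setup() {
  Serial.begin(115200);
  lookupKeyword("weather");  // example: show the data behind "weather"
}

void loop() {}
```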
Development 2
Concept: This design uses the personal, family, and public data groupings, together with an Arduino Uno, the ILS voice recognizer, and servo motors. Push buttons allow users to decide how to categorise different commands, using the three datasets as a basis.
How it works: The voice recognizer detects the keywords of a command, which turn the system on, much as the Amazon server detects the word “weather” within a user’s command. After detection, the push buttons classify the data and trigger the servo motors that fire the bubble guns.
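A minimal sketch of this two-step flow on the Arduino Uno might look like the following. The pin assignments are assumed, and the voice module is again assumed to work with the Elechouse VoiceRecognitionV3 library.

```cpp
// Development 2 flow: the voice module arms the system when it hears a
// trained keyword, then one of three push buttons files the command under
// personal, family, or public data, firing the matching bubble gun.
// Pin numbers are assumed wiring.
#include <SoftwareSerial.h>
#include <Servo.h>
#include "VoiceRecognitionV3.h"

VR myVR(2, 3);                          // voice recognizer on pins 2/3
Servo guns[3];                          // one servo per bubble gun
const int servoPins[3]  = {9, 10, 11};  // personal, family, public
const int buttonPins[3] = {4, 5, 6};    // matching push buttons

uint8_t buf[64];
bool armed = false;  // true once a keyword has been detected

void setup() {
  myVR.begin(9600);
  myVR.load((uint8_t)0);  // keyword record trained during setup
  for (int i = 0; i < 3; i++) {
    guns[i].attach(servoPins[i]);
    guns[i].write(0);
    pinMode(buttonPins[i], INPUT_PULLUP);  // buttons wired to ground
  }
}

void loop() {
  // Step 1: keyword detection turns the system function on
  if (!armed && myVR.recognize(buf, 50) > 0) {
    armed = true;
  }
  // Step 2: the pressed button classifies the data and fires its gun
  if (armed) {
    for (int i = 0; i < 3; i++) {
      if (digitalRead(buttonPins[i]) == LOW) {
        guns[i].write(90);  // trigger the bubble gun
        delay(1000);
        guns[i].write(0);
        armed = false;      // wait for the next command
      }
    }
  }
}
```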
After making this prototype, I realised that there is a lack of visualisation of the data stored in the device, which prevents users from fully understanding its functionality. A digital interface such as an LCD or a touch screen is something I might consider using to present this feature. I will continue working on the functionality of this product in the next stage of development to ensure a transparent information flow and a more intuitive user experience.
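As a first pass at that idea, a standard 16x2 character LCD driven by the LiquidCrystal library could display the detected keyword and the category it was filed under. The wiring and the example strings below are assumptions, not a finished design.

```cpp
// One way the LCD idea could work: after a command is classified, show
// the detected keyword and its category so the stored data stays visible.
// Uses the standard LiquidCrystal library; wiring is assumed.
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 7, 8, A0, A1);  // rs, en, d4-d7 (assumed pins)

void showClassification(const char* keyword, const char* category) {
  lcd.clear();
  lcd.setCursor(0, 0);
  lcd.print(keyword);   // e.g. "weather"
  lcd.setCursor(0, 1);
  lcd.print(category);  // e.g. "public data"
}

void setup() {
  lcd.begin(16, 2);     // 16x2 character display
  showClassification("weather", "public data");
}

void loop() {}
```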