Conclusion
Summary of accomplishments
After quite a bit of trial, error, and frustration, we were able to fully configure the Google Cloud APIs (Speech-to-Text and Text-to-Speech) in the terminal and get user input read from the terminal into Max/MSP. Users can ask both preset questions and their own, and the AI outputs both preset answers and responses drawn from Twitter.
The servo petals open and close, and the leaves are configured for the expressions sad, angry, and happy.
We configured the Google sentiment analysis API for Twitter; Glitchi can read tweets and say them aloud.
The Neopixel lights output a colour representation of Glitchi's emotions.
100% of the flower has been constructed.
The servos no longer overheat since we decided to power them from two Arduinos.
Connecting the servos and Neopixels to Max/MSP and Arduino.
Having Glitchi listen and reply with tweets from Twitter.
Having the HDMI screen display what Glitchi has heard and what his response is.
Having Glitchi express his emotions through the petals, leaves, and lights, with his reply.
Making Glitchi taller and prettier with different petals, leaves and the wooden base.
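The emotion-driven light output listed above can be sketched roughly as follows. This is a minimal Python illustration of the mapping idea only; the emotion names match the three expressions mentioned earlier, but the RGB values, dictionary, and function name are our own assumptions, and the real project routes this logic through Max/MSP and Arduino rather than Python.

```python
# Hypothetical sketch of an emotion-to-colour mapping for the Neopixels.
# The RGB values here are illustrative placeholders, not the project's.
EMOTION_COLOURS = {
    "happy": (255, 200, 0),   # warm yellow
    "sad":   (0, 80, 255),    # blue
    "angry": (255, 0, 0),     # red
}

def colour_for_emotion(emotion, default=(255, 255, 255)):
    """Return the RGB tuple for an emotion, falling back to white."""
    return EMOTION_COLOURS.get(emotion.lower(), default)
```

In practice a lookup like this would live in the Max/MSP patch or the Arduino sketch, with the resulting RGB values sent to the Neopixel strip.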
Challenges for future work
Improving Glitchi's design: making sure that the HDMI screen fits properly into the box window, placing the light in the middle of the flower so that it reflects on the petals, and covering the wires better, perhaps with brown paper, to make them look more root-like.
Filtering out bad words so that Glitchi's language stays PG, because we want him to be conversationally friendly for everyone.
Making a few changes to the code so that the speech recognition runs more smoothly, because sometimes Glitchi echoes, says too many things at once, or repeats the same questions. We would also need to redesign the Twitter code so that it is more accurate, and find a way for him to respond a little faster.
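One way to reduce the echoing and repetition mentioned above might be to suppress consecutive duplicate recognition results before they reach the response logic. The sketch below is a minimal Python illustration under that assumption; the class name and normalisation choices are our own, not part of the project's code.

```python
class DuplicateSuppressor:
    """Drop a recognised phrase if it is identical to the previous one."""

    def __init__(self):
        self.last = None

    def accept(self, phrase):
        """Return True if the phrase should be processed, False if it repeats."""
        # Normalise case and whitespace so trivial variations still match.
        normalised = " ".join(phrase.lower().split())
        if normalised == self.last:
            return False
        self.last = normalised
        return True
```

A gate like this between the Speech-to-Text output and the reply logic would stop Glitchi from answering the same heard phrase twice in a row, though it would not by itself fix recognition accuracy or response latency.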