Context: With the rise of augmented reality technology, there is an opportunity to explore new surfaces for layering content. What would happen if we layered today's oral conversations with additive expressions? Set in 2030, Tak Tak takes on the role of exploring the social and technological implications of projecting our internal meanings onto visible surfaces on our bodies.
Product: Tak Tak is a conversational interface that listens to your conversation and augments it with expressions that emphasize, exaggerate, and convey meaning. By layering these expressions at the periphery of the conversation, speakers can underscore key points and express themselves more fully.
Each Tak Tak integrates AI and machine learning to listen to, interpret, and react to parts of the conversation. Tak Tak gathers its user's conversational data to learn and adapt its modes of expression. Using woven emotional-sensor technology, Tak Tak projects animations that let the listener see and interpret the speaker's intended meaning. Tak Tak begins as a visualization of its user's tone of voice; with machine learning and branding, it can evolve its expressions and meaning-making over time.
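To make the first stage concrete, here is a minimal sketch of what a tone-of-voice visualization pipeline might look like: mapping simple acoustic features of an audio frame to a coarse expression cue. The feature choices (RMS energy and zero-crossing rate), thresholds, and expression labels are illustrative assumptions, not the product's actual model.

```python
import numpy as np

def tone_to_expression(frame: np.ndarray, sample_rate: int = 16_000) -> str:
    """Map a mono audio frame to a coarse expression cue (hypothetical sketch)."""
    # Loudness proxy: root-mean-square energy of the frame.
    rms = float(np.sqrt(np.mean(frame ** 2)))
    # Pitch proxy: zero-crossing rate (higher rate ~ higher-frequency voice).
    zcr = float(np.mean(np.abs(np.diff(np.signbit(frame).astype(int)))))
    if rms < 0.05:
        return "idle"      # near-silence: no augmentation projected
    if zcr > 0.05:
        return "excited"   # loud and high-pitched: exaggerated animation
    return "emphasis"      # loud but lower-pitched: steady emphasis cue

# Example: one second of a loud 440 Hz tone sampled at 16 kHz.
t = np.linspace(0, 1, 16_000, endpoint=False)
loud_high = 0.5 * np.sin(2 * np.pi * 440 * t)
```

A fuller system would replace these heuristics with a learned model adapted to each user's conversational data, as described above, but the shape of the pipeline (audio features in, expression cue out) would be similar.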