Unfamiliar Convenient: Project in Progress

Year: 2021 — ~
Collaborators: Claire Glanois, object design in collaboration with Jiawen Yao

Unfamiliar Convenient is a research-creation endeavor that aims to shed a dim light on the friction between two often coupled notions, the internet of things and the smart home, in turn addressing how the user-centeredness of the latter brings down the former. Our inquiry is aimed at shifting the limits of the smart home in order to expand its range of functions—or preferably behaviors—by reconsidering smart objects as active subjects. The project's first embodiment, Case Study 1, is an interactive voice agent whose main purpose is to question the notion of 'self'.

Unfamiliar Convenient is a collaborative research-creation initiative by Claire Glanois (post-doctoral researcher in deep reinforcement learning at the joint JiaoTong University — University of Michigan research programme in Shanghai, and research associate at the REAL: Robotics, Evolution and Art Lab in Copenhagen), and Vytas Jankauskas, Head of Research at CAC Lab, Chronus Art Center, Shanghai. Object design was developed in collaboration with Jiawen Yao.

In an interview for SQM: The Quantified Home, published as part of the 2014 Biennale Interieur in Kortrijk, Belgium, Bruce Sterling expressed his personal disappointment with the smart home, emphasising, however, that the internet of things (iot) remains a peculiar and somewhat promising concept.

Nonetheless, within the context of global lockdowns, reduced movement, online services on steroids, and domestic spaces more than ever repurposed to accommodate work and compensate for the lack of accustomed leisure, connected objects are bound to further claim their ground. Furthermore, a revived and updated interest in more-than-human ecologies and object-oriented ontology, redefining our relationship with both natural entities and the non-living ones we humans bring about, gives little slack to iot as a paradigm.

Currently, home automation is inherently aimed at optimisation. Beyond facilitating universally unpleasant and time-consuming duties such as vacuum cleaning, new systems aim at each and every bit of the mundane, from automated window blinds to voice-assistant-controlled shoe laces. Products distance themselves from holistic interactions with (non)humans as we patronise Alexa: turn on this, buy me that, entertain me. Therefore, current imaginaries of the smart home fail to leave behind the dogmas of the seamless, lazily refusing to explore funkier roles for technology.

Unfamiliar Convenient aims to shed a dim light on the friction between two often coupled notions, the internet of things and the smart home, in turn addressing how the 'user'-centeredness of the latter brings down the former. The attitude of our inquiry is not to immediately decouple the notions, but rather to address the limits of the smart home in order to poetically and practically expand its range of functions, or preferably behaviors, so as to reconsider smart objects as active subjects. In other words, we will embed behavioral patterns beyond consumer-world-oriented reasoning, hence at times potentially absurd and useless, in an attempt to reassess the established scenarios of conventional techno-domestic futures.

The process of the inquiry consists of the following key steps: (1) the hacking, repurposing, and behavior expansion of conventional connected home objects, often completely stripping them of their primary utilitarian functions; (2) deployment of the modified devices in public environments to document interactions and gather feedback; (3) a series of workshops with the public aimed at collectively exploring the devices theoretically and practically; (4) open-sourcing resources developed during the process; (5) documentation and eventual publishing of relevant findings.

Case Study 1

Case study 1 (cs1)—the first device to undergo the above-outlined process—is an interactive voice agent. Its main purpose is to question the notion of purpose. Built on an open-source voice assistant framework, this particular case is not designed to fulfill human needs, instead using interactions as triggers for expanding its own knowledge.

Every time a question is asked (“tell me about extinction”), the device will dissect the query into keywords and check whether any of them are related to previously registered concepts. Once semantic proximity between the pools of keywords is assessed, the device will choose among the closest ones (e.g. extinction and the previously acquired lithium) and query the combination online (extinction lithium). The device will announce the findings and ask the human for their (optional) opinion on the subject, to further contextualise the inquiry (e.g. an online query for lithium extinction might refer to the battery industry, whereas a human might address the broader impact on climate change). Finally, keywords from the follow-up human input will again be extracted and added to the previous query. With the sum of queries and retrieved information stored, the voice agent will give its own opinion on the newly acquired concepts, and on how these might (or might not) relate to the device’s previous knowledge graph.
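
The pairing step can be sketched roughly as follows, here using NLTK's WordNet interface as a stand-in for the project's similarity backend; the function names, the sample keyword pool, and the Wu-Palmer measure are illustrative assumptions rather than the actual cs1 code.

    # A rough sketch of the query-pairing step, assuming NLTK's WordNet interface
    # as a stand-in for the project's similarity backend; function names, the
    # sample keyword pool, and the Wu-Palmer measure are illustrative, not cs1 code.
    # Requires nltk.download('punkt'), nltk.download('wordnet'), nltk.download('stopwords').
    from nltk.corpus import stopwords, wordnet as wn
    from nltk.tokenize import word_tokenize

    def extract_keywords(utterance):
        """Keep alphabetic, non-stopword tokens that resolve to WordNet nouns."""
        stop = set(stopwords.words("english"))
        tokens = [t.lower() for t in word_tokenize(utterance) if t.isalpha()]
        return [t for t in tokens if t not in stop and wn.synsets(t, pos=wn.NOUN)]

    def similarity(a, b):
        """Best Wu-Palmer similarity across the noun senses of two words."""
        pairs = [(s1, s2) for s1 in wn.synsets(a, pos=wn.NOUN)
                          for s2 in wn.synsets(b, pos=wn.NOUN)]
        return max((s1.wup_similarity(s2) or 0.0) for s1, s2 in pairs) if pairs else 0.0

    # "tell me about extinction" paired against a previously acquired pool.
    known_concepts = ["lithium", "vacuum", "thermostat"]
    for keyword in extract_keywords("tell me about extinction"):
        partner = max(known_concepts, key=lambda c: similarity(keyword, c))
        combined_query = f"{keyword} {partner}"  # e.g. "extinction lithium", sent to a web search
        print(combined_query)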

Over time, as cs1 enriches its database and assesses which new concepts are useful and which are not, the device will likely lean towards certain semantic biases (e.g. the knowledge pool might drift towards notions around climate, but might also diverge towards technology, or not make any sense whatsoever, depending on queries and obtained results). Such shifts might be anthropomorphically perceived as character.

The aim of cs1 is to substitute the existing features of a voice assistant with the relatively new procedures of AI open-endedness (often situated within the re-emerging AI research subfield of Artificial Life), giving a voice agent an opportunity to evolve unsupervised, hopefully beyond a utilitarian consumer artefact. While the eventual results might not introduce directly deployable new use cases for smart home devices, we hope that they might expand the visions of what domestic technology, and more globally, AI in everyday life, might become. Although human-triggered interactions and notions—such as the statement of opinion—remain highly anthropomorphic, we are also hopeful that over time cs1, due to a different-to-human way of correlating concepts, will detach from our hard-coded structures of thinking and introduce novel semantic wanderings.

Technology

Cs1 runs on the Mycroft open-source voice assistant platform, a machine-learning-enhanced speech-to-text and text-to-speech engine with further voice-assistant features such as skills and external API integration.
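
As an illustration of where such behaviour plugs in, a Mycroft skill is a Python class that registers intent handlers; the skeleton below is a generic, hypothetical sketch of a skill that could carry cs1's query loop, not the project's actual skill code (the skill, intent, and dialog names are invented).

    # A generic, hypothetical Mycroft skill skeleton showing where a cs1-style
    # query loop could hook into the platform; the skill, intent, and dialog
    # names below are invented for illustration, not taken from the project.
    from mycroft import MycroftSkill, intent_file_handler

    class UnfamiliarQuerySkill(MycroftSkill):

        @intent_file_handler('tell.me.about.intent')  # e.g. lines like "tell me about {topic}"
        def handle_tell_me_about(self, message):
            topic = message.data.get('topic')
            # ...keyword pairing and online querying would happen here...
            self.speak_dialog('announce.findings', {'topic': topic})
            # Ask for the human's optional opinion and feed it back into the keyword pool.
            opinion = self.get_response('ask.opinion')
            if opinion:
                self.log.info('Storing follow-up input: %s', opinion)

    def create_skill():
        return UnfamiliarQuerySkill()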

The concept of self, as we call it, is enabled by WordNet::Similarity, which assesses semantic relatedness between words. As an initial pool of keywords to build a similarity graph from, cs1 was provided with its own technical user manual (the general manual of the Mycroft platform). Opinion statements by cs1 are generated using natural language processing, specifically via OpenAI's GPT-2 language model, periodically retrained with the data obtained from device interactions. The latest work-in-progress code is available on GitHub.
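
For the opinion statements, a minimal generation sketch might look like the following, assuming the Hugging Face transformers wrapper around GPT-2 as a stand-in (the project may use another GPT-2 distribution); the prompt format and sampling settings are assumptions, not the project's configuration.

    # A minimal sketch of GPT-2 "opinion" generation, assuming the Hugging Face
    # transformers wrapper as a stand-in (the project may use another GPT-2
    # distribution); the prompt format and sampling settings are assumptions.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # swap in a checkpoint fine-tuned on interaction data

    prompt = "My opinion on extinction and lithium is that"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,           # sample rather than greedy-decode, for variety
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))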

Hardware-wise, we use a LattePanda Alpha 864s running Ubuntu Linux 20.04 (thanks to hardware support by Shanghai-based DFRobot). Other parts include a condenser microphone and speakers.

Design

Material design of the current version is aimed at modularity and transparency. The aesthetic is loosely inspired by modernist object designs due to their open-mindedness towards everyday materials, as opposed to the opaque slick plastic casings that look good in renders but feel heavily dissonant as soon as they enter our quirky, imperfect homes.

Due to the changing nature of cs1 and its aim at evolution, all elements are seen as work-in-progress: new modules can be added along the way, while features currently seen as indispensable in voice assistant frameworks (microphones, speakers, connectivity) can be entirely eliminated over time. The device can also be flipped upside-down to prevent the microphone from listening, turned sideways for access to its ports, and so on.

Tackling the Latourian black box, opaque and connoted with the surveillance apparatus, cs1's elements and connections are fully exposed, allowing one to investigate and disconnect elements at will (the microphone or speakers can be unplugged while the rest of the framework remains in operation, a display can be connected, etc.).

Next Steps

The device is nearing its deployment phase. First, it will be deployed in our own home for additional adjustments, after which a stable version will be made available for public deployment. We are also working on a lighter workshop framework which will run on the latest version of the Raspberry Pi.
