What if cities have dedicated urban interfaces in public spaces that invite people to share stories and
memories of public interest, and facilitate the creation of a public narration? What if people share and
access these stories and memories while chatting with a public bench? Will the interaction with the
bench provide a meaningful, memorable and playful experience of a place?
The Bench of Multi-sensory Memories is an urban interface whose objective is to investigate the role
of urban media in placemaking. It mediates the creation of a public narration, and affords citizens a
playful and engaging interface to access and generate stories and memories that form this narration.
The bench has been designed and fabricated in collaboration with the Malaysian artist Alvin Tan,
who has experience with bamboo installations in public spaces. Its structure is robust and allows all
the hardware components to be housed easily and safely. The hardware and software system
consists of: a) input devices, a USB microphone and Force Sensitive Resistor (FSR) sensors; b) an
Analog-to-Digital/Digital-to-Analog (AD/DA) converter module board; c) the controller, a Raspberry Pi
3; d) an output device, a speaker; and e) the software, the Google Speech API. The components operate as
follows: the FSR sensors detect the presence of a person on the bench through the pressure exerted
by their weight; the AD/DA converter module board reads the analogue values of the FSR
sensors and converts them into digital values readable by the Raspberry Pi; the Raspberry Pi,
which provides Wi-Fi, Ethernet, Bluetooth, USB, HDMI and an audio jack,
easily connects the input and output devices. Currently, the system software implements speech
applications, namely text-to-speech and speech-to-text: the Google Speech API generates a voice
from written text, and records and transcribes the visitor's speech into text, through the output and input
devices respectively. At the moment, therefore, the system performs a scripted sequence of text-to-
speech and speech-to-text translations. In the short term, we plan to employ a custom chatbot
able to conduct interactive and meaningful conversations.
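The presence-sensing step described above can be sketched in a few lines of Python. This is a minimal illustration, not the bench's actual firmware: it assumes the AD/DA board delivers raw readings as integers (here notionally 10-bit, 0-1023), and the threshold and debounce counts are placeholder values chosen for the example.

```python
# Illustrative sketch of the bench's FSR presence detection.
# Assumption: raw ADC readings arrive as integers in the 0-1023 range;
# the threshold and sample count are example values, not the bench's tuning.

PRESSURE_THRESHOLD = 300   # raw ADC value above which the FSR counts as pressed
REQUIRED_SAMPLES = 3       # consecutive pressed readings before reporting presence

def is_pressed(raw_value, threshold=PRESSURE_THRESHOLD):
    """True when a single raw FSR reading exceeds the pressure threshold."""
    return raw_value > threshold

def detect_presence(samples, required=REQUIRED_SAMPLES):
    """Debounce a stream of raw FSR readings: report presence only after
    `required` consecutive readings above the threshold, so a brief touch
    or sensor noise does not trigger the bench."""
    consecutive = 0
    for raw in samples:
        consecutive = consecutive + 1 if is_pressed(raw) else 0
        if consecutive >= required:
            return True
    return False
```

Debouncing over several consecutive samples distinguishes a person sitting down from a passing bump, which matters in a public space where the bench should only start a conversation with someone who has actually settled on it.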
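The scripted text-to-speech/speech-to-text sequence can likewise be sketched as a simple loop. In this sketch the Google Speech API calls are abstracted behind two injected callables, `speak` and `listen`, so the flow can be shown (and tested) without the hardware or the API; the prompts are invented placeholders, not the bench's actual script.

```python
# Illustrative sketch of the bench's scripted interaction sequence.
# Assumptions: `speak(text)` stands in for a text-to-speech call played
# through the speaker, and `listen()` stands in for recording the visitor
# via the USB microphone and transcribing it with speech-to-text.
# The prompts below are placeholders, not the bench's real script.

SCRIPT = [
    "Hello! I am the Bench of Multi-sensory Memories.",
    "Would you like to share a memory of this place?",
]

def run_scripted_sequence(speak, listen, script=SCRIPT):
    """Play each scripted prompt via text-to-speech, then record and
    transcribe the visitor's reply; return the transcribed replies."""
    replies = []
    for prompt in script:
        speak(prompt)             # output device: voice generated from text
        replies.append(listen())  # input device: speech transcribed to text
    return replies
```

Keeping the speech services behind plain callables also anticipates the planned next step: swapping the fixed script for a custom chatbot only requires replacing the loop over `SCRIPT` with a dialogue manager that chooses each prompt from the previous reply.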