The project Meeting Points is conceptualized as a set of philosophical discussions between two robots. In its first appearance, the conceptual pillars for the discussions are the philosophical standpoints of Aristotle and Nietzsche on various topics. We used several criteria to select them for our first Meeting Points: the importance of the historical periods in which they were leading thinkers, the improbability of comparing their opinions about virtues and human character, and their significant influence on the revolutionary Renaissance and Dadaist art movements. As such, it is a historical and epic discussion between Aristotle’s Ethical Robot (Magnanimous) and Nietzsche’s Overman Robot (Übermensch).
The interactive installation Meeting Points has no pretensions to be classified as artwork; rather, it is “anti-art”, as it tends to criticize contemporary aesthetic, cultural and social changes resulting from the mutual interaction between people and technology.
The central characters of this interactive socio-critical drama are the Ethical Robot and the Overman Robot. The first is fed with knowledge collected from some of Aristotle’s main works, such as Nicomachean Ethics, Poetics, Politics and Metaphysics, and the second from Nietzsche’s Thus Spoke Zarathustra, The Antichrist, Beyond Good and Evil, The Birth of Tragedy and Ecce Homo.
We are trapped in circles of information, without facts, only interpretations, just as the Ethical Robot and the Overman Robot are trapped on a roundabout of meanings, symbols, and metaphors that they mix but do not understand, without facts, only machine interpretations. As such, the tragedy in the installation Meeting Points: Übermensch and Magnanimous is about destroying and rebuilding our knowledge- and technology-addicted society until something good comes up.
The key technical novelty presented in the interactive installation Meeting Points is the combination of chatbot technologies and Recurrent Neural Network (RNN) models, which will enable reinforcement learning in order to create artificial conversational agents that achieve human-level performance. The fact that things can communicate with each other and with humans enables unsupervised learning, reinforcement learning, and opportunities for multiplying knowledge.
Neural Conversational Agent technologies allow us to transform everyday “things” into “smart objects” that can understand and react to their environment. A step further in defining the architectural principles of “smart objects” are cloud speech recognition and speech synthesis technologies, which increase interactivity and raise the level of interaction between people and “smart objects”. Motivated by the technological era, in the interactive installation Meeting Points we will use two Neural Conversational Agents. In order to create philosophical discussions between the two robots, we will use Recurrent Neural Network (RNN) models, applied here as character-level language models.
We will train the RNNs on chosen texts by Nietzsche and Aristotle, and each RNN will model the probability distribution of the next character in a sequence given the sequence of previous characters. Hence, this will allow us to generate new text one character at a time, as shown in Figure 1 (an example RNN with 4-dimensional input and output layers and a hidden layer of 3 units, i.e. neurons). We will use a standard Softmax classifier, and the RNNs will be trained with mini-batch Stochastic Gradient Descent. Applying chatbot technology, with the trained Nietzsche and Aristotle neural network models as the conversation base, we will create a Neural Conversation Nietzsche cyber clone and a Neural Conversation Aristotle cyber clone (Figure 2).
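The training scheme described above — a character-level RNN with a softmax output layer, trained with stochastic gradient descent — can be sketched in a few dozen lines. This is a minimal illustration on a toy corpus, not the installation's actual configuration; the hyperparameters and the stand-in text are assumptions.

```python
import numpy as np

# Minimal sketch of a character-level RNN language model: given previous
# characters, it models a softmax distribution over the next character and is
# trained with (mini-batch-of-one) stochastic gradient descent.
# Corpus and hyperparameters are illustrative assumptions.

text = "thus spoke zarathustra. " * 20          # toy stand-in for the Nietzsche corpus
chars = sorted(set(text))
char_to_ix = {c: i for i, c in enumerate(chars)}
vocab, hidden, seq_len, lr = len(chars), 32, 8, 0.1

rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (hidden, vocab))      # input -> hidden
Whh = rng.normal(0, 0.01, (hidden, hidden))     # hidden -> hidden (recurrence)
Why = rng.normal(0, 0.01, (vocab, hidden))      # hidden -> output
bh, by = np.zeros(hidden), np.zeros(vocab)

def step(inputs, targets, hprev):
    """Forward and backward pass over one sequence; returns loss, grads, last hidden state."""
    xs, hs, ps, loss = {}, {-1: hprev}, {}, 0.0
    for t, ix in enumerate(inputs):
        xs[t] = np.zeros(vocab); xs[t][ix] = 1          # one-hot input character
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()  # softmax over next char
        loss += -np.log(ps[t][targets[t]])              # cross-entropy loss
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby, dhnext = np.zeros_like(bh), np.zeros_like(by), np.zeros(hidden)
    for t in reversed(range(len(inputs))):              # backprop through time
        dy = ps[t].copy(); dy[targets[t]] -= 1          # softmax gradient
        dWhy += np.outer(dy, hs[t]); dby += dy
        dh = Why.T @ dy + dhnext
        dhraw = (1 - hs[t] ** 2) * dh                   # through tanh
        dbh += dhraw; dWxh += np.outer(dhraw, xs[t]); dWhh += np.outer(dhraw, hs[t - 1])
        dhnext = Whh.T @ dhraw
    return loss, (dWxh, dWhh, dWhy, dbh, dby), hs[len(inputs) - 1]

losses, h = [], np.zeros(hidden)
data = [char_to_ix[c] for c in text]
for epoch in range(200):
    p = (epoch * seq_len) % (len(data) - seq_len - 1)
    loss, grads, h = step(data[p:p + seq_len], data[p + 1:p + seq_len + 1], h)
    for param, grad in zip((Wxh, Whh, Why, bh, by), grads):
        param -= lr * np.clip(grad, -5, 5)              # SGD step with gradient clipping
    losses.append(loss)

print(f"loss: {losses[0]:.2f} -> {losses[-1]:.2f}")
```

Sampling new text then amounts to feeding the network's own predicted character back in as the next input, one character at a time, as described above.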
Those avatars will be deployed to two separate internet access points. Using Raspberry Pi devices, we will connect the robots (smart objects) with the Neural Conversation clone access points (robot brains) and enable philosophical discussions between the two robots using neural chatbots, speech recognition and speech synthesis technologies.
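The robot-robot exchange itself is a simple alternating loop: each robot's utterance becomes the other robot's input. The sketch below illustrates that loop with placeholder reply functions; in the installation the two brains would be the networked Neural Conversational Agents, with speech synthesis and recognition in between. All function names here are illustrative assumptions.

```python
# Sketch of the robot-robot conversation loop. The two "brains" below are
# placeholders standing in for the cyber clones' chatbot endpoints.

def aristotle_brain(utterance: str) -> str:
    # placeholder for the Aristotle cyber clone's reply endpoint
    return f"Virtue lies in the mean, not in '{utterance}'."

def nietzsche_brain(utterance: str) -> str:
    # placeholder for the Nietzsche cyber clone's reply endpoint
    return f"Beyond '{utterance}' lies the Overman."

def converse(turns: int, opener: str) -> list:
    """Alternate utterances between the two robot brains for a fixed number of turns."""
    transcript, utterance = [], opener
    brains = [aristotle_brain, nietzsche_brain]
    for t in range(turns):
        utterance = brains[t % 2](utterance)   # each reply feeds the other robot
        transcript.append(utterance)
    return transcript

for line in converse(4, "What is the good life?"):
    print(line)
```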
By applying this concept, things converted into “smart objects” will obtain a distinctive personality, intelligence, and decision-making ability. The key novelties in the installation Meeting Points: Übermensch and Magnanimous are:
- Humanless creative process conducted by Artificial Conversational Agents.
- Using Cyber Clones as creative and artistic medium.
- Art of AISense or Machine-Context Art.
- Robot-Robot Interactions as new interaction phenomena and Human Third-Party Neo Technological Experience.
- Unsupervised and reinforcement learning of conversation agents.
Dr Anders Ynnerman moves his hands over the images on the touchscreen, slicing, dicing and rotating, revealing layer upon layer of skin, muscle and bone.
The audience watches in awe.
They’re looking at a full-body scan of a traffic accident victim, and they can see every injury in larger-than-life detail, including the cause of death: a broken neck, caused by a blow to the head.
Dr Ynnerman, a scientific data visualisation expert and the director of the Norrköping Visualisation Centre in Sweden, was speaking at a session on virtual and augmented reality (AR/VR) at EmTech Asia.
The conference, organised by the MIT Technology Review to explore global emerging technologies, was held in Singapore from 14-15 February 2017.
These full-body scans or ‘virtual autopsies’, he said, are treasure troves of medical data; he takes great care to present them with the utmost respect for the persons who died under tragic circumstances.
Touched by Data
The datasets are generated through computerised tomography (CT) scans, which produce around 25,000 slices of data that together form a full virtual replica of the human patient.
Dr Ynnerman uses mathematics and computer graphics to combine these slices into a huge block of data that can be visualised in three dimensions, and manipulated using touch interfaces.
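The core data structure behind this is straightforward: stacking the 2-D slices along a depth axis yields a 3-D volume, from which arbitrary cross-sections can be cut for display. The sketch below illustrates that step with a scaled-down random volume (slice count and resolution are illustrative assumptions, not the real CT dimensions).

```python
import numpy as np

# Sketch: CT slices stacked into one 3-D volume, then sliced along other axes.
# A real scan would have ~25,000 slices; sizes here are toy values.

n_slices, h, w = 100, 64, 64
slices = [np.random.rand(h, w).astype(np.float32) for _ in range(n_slices)]

volume = np.stack(slices, axis=0)        # shape: (depth, height, width)

axial    = volume[n_slices // 2]         # the scanner's native slice plane
coronal  = volume[:, h // 2, :]          # a "virtual cut" through the stack
sagittal = volume[:, :, w // 2]          # another reconstructed plane

print(volume.shape, axial.shape, coronal.shape, sagittal.shape)
```

A touch interface then maps gestures to the index (or oblique plane) being extracted, which is what makes the slicing and rotating feel direct.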
The technology is clearly a huge boon for medical schools and hospitals.
But he quickly realised its potential for communicating science to the general public. His team has gone on to scan all manner of museum artefacts, including Egyptian mummies, fossils and a wide range of animal specimens.
They have placed touchtable devices in museums around the world — including the Science Centre Singapore — for visitors to interact and play with the data.
“We’ve gone from very basic mathematical principles all the way out into the museum gallery. I see children exploring scientific data. When they interact with it they’re getting interested in the content, but they’re also getting interested in the technology.”
Dr Ynnerman added: “That’s the best reward you can get as a professor. Seeing kids playing with your stuff is much more important than getting citations and papers!”
Kissed by Innovation
Also speaking at the session was Dr Adrian David Cheok, director of the Imagineering Institute in Malaysia and Chair Professor of Pervasive Computing at the City University of London.
(Dr Cheok was formerly a professor at the National University of Singapore, where he founded the Mixed Reality Lab; it has since moved to London.)
Because non-verbal interactions make up a large part of how humans communicate, talking to someone over the internet still pales in comparison to meeting him or her in person, said Dr Cheok.
Thus, his goal is to develop tools that let us perceive the world through the internet using all of our five senses including touch, taste and smell.
“In the future, we’ll move from the age of information, where we are today, into the age of experience,” he said.
“You can share any experience through the internet: you can feel, taste and smell what it’s like to be anywhere in the world.”
His group has developed a range of devices aimed at letting you do just that.
For those who crave touch, there is the Huggy Pajama, a wearable device that lets parents and children exchange virtual hugs.
More recently, the group introduced the Kissenger, which does exactly what you think it does — the silicone lip-like device connects to your smartphone and lets you kiss someone over the internet.
Smells Like Tech Spirit
Besides touch, the senses of taste and smell are also powerfully evocative.
“Taste and smell are directly intermingled with the limbic system of the brain, which is responsible for emotion and memory,” said Dr Cheok.
“For example, smell can trigger a memory of your grandmother, or trigger an emotion — it can make you feel happy or sad if the same smell has done that in the past.”
Engineering taste and smell, however, is no trivial task.
Dr Cheok’s team has developed smartphone devices that let users send their friends smells over the internet; these involve the use of an atomiser and scent cartridges. But such devices have drawbacks — only one smell can be sent per cartridge, and cartridges have to be replaced once they run out of scent.
Thus, instead of resorting to chemicals, Dr Cheok is working on ways to directly stimulate the tastebuds or olfactory (smell) receptors.
By placing an electrode on your tongue, for example, he can deliver electrical signals that make you taste something sour; yet another device generates a sweet taste by stimulating the appropriate receptors on the tongue with heat.
Similarly, the team has also tried to electrically stimulate the olfactory receptors inside the nasal cavity to recreate smells.
Wearing this device, however, is still a little uncomfortable.
Although such devices may not be available on the mass market just yet, Dr Cheok believes that it is only a matter of time before they find their way into our homes.
“People really want to experience all of the five senses — they want to be able to have dinner with their grandmother even if she’s on the other side of the world.”
By CUBADEBATE – 12 March 2017
Computer expert Adrian David Cheok has created the Kissenger prototype, an invention that will allow long-distance kissing, according to the BBC.
According to Cheok, who heads the Imagineering Institute in Nusajaya, Johor, Malaysia, while the initial idea was a device to connect families, “the greatest interest comes from couples” living separately.
“The initial prototype was born in 2003 and, after several tests, the current design was reached in 2015. It consists of a phone case that connects to the audio jack of the iPhone, iPod or iPad,” he explained.
The creator said that the application will only be available for devices running the iOS operating system, and although the device is still in its prototype phase, it is expected to hit the market by the end of the year.
Kissenger comes from the combination of ‘Kiss’ and ‘ssenger’ (short for ‘messenger’).
For affective communication to take place, the two participants must have the device and download the application.
The phone is inserted into the holder, which contains a silicone area with high-precision force sensors capable of measuring the force exerted by the lips during the kiss.
Then, through the application, the device sends this data in real time to the recipient’s device.
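The real-time link described above amounts to packing each sensor reading into a small binary frame and streaming it to the partner's device. The sketch below illustrates one way to do that with Python's standard library; the frame layout is an illustrative assumption, not the Kissenger's actual protocol, and a loopback socket pair stands in for the internet link.

```python
import socket
import struct

# Hedged sketch of a real-time sensor link: pack a timestamp and a
# lip-pressure reading into a fixed-size binary frame and send it.
# The frame layout is an assumption for illustration only.

FRAME = struct.Struct("!df")  # (timestamp: float64, force: float32), network byte order

def encode_reading(timestamp: float, force_newtons: float) -> bytes:
    return FRAME.pack(timestamp, force_newtons)

def decode_reading(frame: bytes) -> tuple:
    return FRAME.unpack(frame)

# Loopback demo: a connected socket pair stands in for the internet link.
sender, receiver = socket.socketpair()
sender.sendall(encode_reading(1489.25, 0.75))
ts, force = decode_reading(receiver.recv(FRAME.size))
sender.close(); receiver.close()
print(ts, force)
```

In practice the receiving device would replay each frame through its actuator as it arrives, which is what makes the interaction feel synchronous.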
(With information from Prensa Latina)