The Centre for HCI Design at City University London will host a talk by Dr. Dr. Norbert Streitz, Scientific Director of the Smart Future Initiative, on Human-Centered Design in the Large: Smart Cities and Smart Airports. The schedule and venue are as follows:
Date: Thursday, 20/11/2014
Room C304, Tait Building, City University London
Northampton Square EC1V 0HB
London, United Kingdom
Below is the abstract of Dr. Dr. Norbert Streitz’s talk:
Entering the ‘Urban Age’ with more than half of the world population living in cities, economic prosperity and quality of life will largely depend on the ability of cities to exploit their full potential. With the deployment of ambient intelligence infrastructures, urban environments are transformed into interactive smart spaces. Combining information and experience spaces with ubiquitous computing in urban contexts results in what is being called ‘smart hybrid cities’.
Cities provide environments for different activities (e.g., living, working, shopping, entertainment, transportation, sojourning, communicating). At the same time, contemporary life styles become less focused and increasingly multidimensional. People’s lives are taking place betwixt and between multiple offers and options. People’s roles change within short time frames due to parallel activities in co-located situations. Airports are good examples of this blending of activities by providing a range of functions people are usually looking for in cities, but now for a limited time frame at this specific location. Airports serve as ‘transient spaces’ providing support for ‘polyphasic activities’. Translating this in an overall design rationale, one can state: “designing airports is designing transient smart cities”.
Against this background, the talk addresses issues and challenges in designing smart cities and their implications for transient spaces, taking airports as one example. Contrasting with the often technology-driven approaches, this talk will present a human-environment-interaction perspective on the challenge of urban life management. This includes the shift from information design to experience design, which for airports means addressing the passenger experience. Furthermore, we argue for a people-oriented, empowering smartness, where smart spaces make people smarter by keeping the human in the loop. This also requires discussing the implications of sensor-based smart environments for privacy, especially in public spaces: privacy might become a commodity people have to pay for, and thus a privilege. The talk will build on a perspective, or vision, of reconciling humans and technology, arguing for a human-centered design approach resulting in Humane Smart Hybrid Cities where people can exploit their creative potential and lead a self-determined life.
About the Speaker
Dr. Dr. Norbert Streitz (Ph.D. in physics, Ph.D. in psychology) is a Senior Scientist and Strategic Advisor with more than 30 years of experience in information and communication technology. He is the founder and scientific director of the Smart Future Initiative (SFI), launched in 2009. Previously, Norbert held positions as deputy director and division manager at the Fraunhofer research institute IPSI in Darmstadt. His research is in the areas of human-computer interaction, hypertext/hypermedia, CSCW, ubiquitous computing, ambient intelligence, the disappearing computer, smart environments and smart cities, and was carried out in projects funded by the European Commission, partners in industry and various foundations. Norbert also taught at the Department of Computer Science of the Technical University Darmstadt for more than 15 years. Before joining IPSI in Darmstadt, he was an Assistant Professor at the Technical University RWTH Aachen, with research and teaching in cognitive science and ergonomics. This was preceded by his work in theoretical physics at the University of Kiel. Furthermore, he was a post-doc research fellow at the University of California, Berkeley, a visiting scholar at Xerox PARC and at the Intelligent Systems Lab of MITI, Tsukuba Science City, Japan. Norbert has published/edited 20 books and authored/coauthored more than 130 scientific papers. He serves on editorial and advisory boards, steering and conference committees, and as a consultant. He is regularly asked to present keynote speeches and tutorials at scientific as well as commercial events in Europe, the US, Brazil, Qatar, Malaysia, Singapore, Korea, Hong Kong, China, and Japan. You can find out more about him via the following link: Dr. Dr. Norbert Streitz Biography
At our lab we mostly focus on researching novel technologies that change the way we interact digitally. While some of our technologies, such as the Electric Taste Machine, have had more academic resonance, we aim to develop technologies with real-world impact, an aim that has resulted in several startup companies, most recently RingU.
We are now proud to announce that Mixed Reality Lab member Marius Braun has also taken on an entrepreneurial role with his startup nudge.
nudge is a wristband that helps you keep track of the important things in life, and nothing else. It connects with your phone and acts as a filter for the notifications you receive, alerting you when you get a vital email or text message. It can even let you know if a particular website has updated, or that there is an important calendar event you’re about to miss. And most importantly, it does nothing else: you get no further distractions from your phone, giving you more quality time with friends and family, more time to be creative and more time to enjoy your life.

Marius co-founded the company nudge in April, and it has since come a long way in development, pivoting several times along the way. In the picture below, the team has just won City University’s CitySpark competition, earning a £3,000 prize that has greatly aided the prototyping process (Marius on the right). Marius & co will be launching the product in mid-November on the crowdfunding platform Kickstarter.
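The filtering idea nudge describes can be sketched as a simple whitelist of notification rules. This is a toy illustration under assumed rule names, not the product's actual logic:

```python
# Toy sketch of whitelist-based notification filtering, as described
# for nudge (hypothetical rules; not the actual product's code).
# A notification reaches the wristband only if it matches a rule;
# everything else is silently dropped.

IMPORTANT = {
    ("email", "boss@example.com"),  # a vital sender (example address)
    ("calendar", "any"),            # never miss a calendar event
}

def should_buzz(kind: str, source: str) -> bool:
    """Forward a notification to the wristband only if a rule matches,
    either exactly or via a catch-all 'any' rule for that kind."""
    return (kind, source) in IMPORTANT or (kind, "any") in IMPORTANT

print(should_buzz("calendar", "dentist"))  # True: calendar catch-all
print(should_buzz("game", "some-app"))     # False: no rule, no buzz
```

Everything that fails the filter simply never reaches the wearer, which is the "it does nothing else" promise in miniature.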
If you would like to follow an exciting startup story and find out how they are getting on, you can sign up here.
“When I started out,” says David Levy, international chess champion and expert in artificial intelligence, “I didn’t know anything about artificial vaginas. It is quite extraordinary how much interest there is in that subject.”
Levy’s book, Love and Sex with Robots, is perhaps the fullest exploration of the future of humans and robots, especially their interaction in the bedroom. It explores the details of internet-linked devices that transmit real physical contact.
And Levy is no fantasist. He is the only person to win the Loebner prize – an annual competition to determine which chat software is the most realistic – in two separate decades, first in 1997 and again in 2009.
It was while researching his 2003 book, Robots Unlimited, that he first became interested in the subject. Specifically, he read a quote from a 1984 book by Sherry Turkle, a professor at the Massachusetts Institute of Technology. An interviewee, ‘Anthony’, told Turkle that he had tried having girlfriends but preferred his relationship with his computer.
“That quotation hit me like a brick wall,” says Levy. “I thought – if a smart guy could think like that in 1984, I wonder how much the concept of human-computer emotional relationships has developed since then.”
A great deal is the answer. Adrian David Cheok, Professor of Pervasive Computing at London’s City University, has been refining a device called a Kissinger: a set of pressure-sensitive artificial lips that can transmit a kiss from a real mouth to a similar device owned by a partner who might be thousands of miles away.
The Kissinger system has been in development for about eight years, with the latest model designed to plug into a smartphone. By kissing the screen, the movements of a person’s lips can be mirrored in the other machine and that kiss will be given to whoever has his or her mouth against a corresponding machine.
Several companies have shown an interest in the device and Cheok expects to see it hit the market in mid-2015.
Eventually, Cheok believes, “almost every physical thing, every being, every body, will be connected to the internet in some way.’’
“The future,” he says, “will involve the subconscious part of the brain. We already have intimate data on the internet, but we still don’t feel that we can really know somebody online. There’s something missing between the experience of making a Skype call and meeting someone. And this is where transmitting the other senses is so important.”
Levy, 69, and Cheok, 42, have teamed up to work on a new “chat agent” – software that can understand and respond to natural human language and speech. The project, named I-Friend, will be based on artificial intelligence software that won Levy and his team the Loebner prize for a second time in 2009.
“It will be one of the most realistic artificial chat agents when the project is finished,” says Cheok.
Levy is keen to stress the versatility of the software they’re developing. The I-Friend, he says, can be configured for any embodiment and persona that the market requires. “It could, for example, be an upmarket toy such as a furry animal or a creature from another planet; or a web avatar that repeatedly turns the conversation to discuss a company and its products; or a mobile app such as a virtual girlfriend or boyfriend.”
Cheok adds: “In the first instance, it could probably replace all the phone sex for which people for some reason pay very high rates.” Ultimately, however, the aim would be for it to be “used in robots for artificial love and sex chat”.
And this is where the artificial vaginas come in.
“I believe it is going to be perfectly normal that people will be friends with robots, and that people will have sex with robots,” says Cheok. “All media will touch humanity.”
There is already a market for realistic-looking life-sized dolls made from a durable, high-elastomer silicone material. Female dolls have either fixed or removable vaginas and cost anything from $5,000 to $8,000. But they don’t do anything. They are unresponsive.
In time, Levy predicts, it will be quite normal for people to buy robots as companions and lovers. “I believe that loving sex robots will be a great boon to society,” he says. “There are millions of people out there who, for one reason or another, cannot establish good relationships.”
And when does he think this might come about? “I think we’re talking about the middle of the century, if you are referring to a robot that many people would find appealing as a companion, lover, or possible spouse.”
Levy, a former Chess Master who represented Scotland, developed his interest in computing while studying at the University of St Andrews and later as a computer science postgraduate at the University of Glasgow, where he taught his students to program. During this time, he began looking into the programming of chess, which ultimately led to an interest in human-computer conversation.
Levy and Cheok’s “I-Friends” will have a sophisticated module that will endow the software with emotions, personality and moods. They aim to tailor the software to any required persona, for example a girlfriend or boyfriend who will be able to take part in continual and varied sexually charged conversations.
I-Friends is a range of conversational software companions based on artificial intelligence. Its working name is “Do-Much-More”. Levy and Cheok are currently trying to commercialise this chatbot [a program designed to simulate intelligent conversation] by adding significantly to its conversational capabilities.
It will serve as a software core that can be configured for anything the market requires. It could, for example, be a web avatar that discusses a company and its products; a mobile app such as a virtual girlfriend or boyfriend; or a server-based application with which cell phone users can interact via SMS messaging. The same core software can be used as the basis for any desired character, simply by changing the data that defines the persona.
“The very first chatbot was the famous ELIZA program written at MIT in the 1960s, named after Eliza Doolittle in George Bernard Shaw’s Pygmalion,’’ says Levy. “ELIZA did very little but caused a stir at the time and is well documented in the Artificial Intelligence literature. Our first chatbot program had the name Do-A-Lot because it did more than ELIZA. Our second generation chatbot does even more, and was therefore given the working name Do-Much-More.’’
Levy says consumers eventually will be able to experience “appropriately designed artificial genitalia’’ that feel and behave like the real thing.
“There will be body warmth, synthesised speech, moving limbs. The first sex robots will be primitive in quality but with time more sophisticated ones will be available.’’
Do-Much-More delivers a significant leap in performance over the original Do-A-Lot software. That leap has been achieved by retaining Do-A-Lot’s original strengths, extending its system of “variables” (word types) and its morphology (for example by including phrasal verbs), and increasing the sophistication of its response-generation system through the use of two important lexical resources developed within the academic computational-linguistics community: WordNet and ConceptNet.
WordNet is a semantic lexicon for the English language. It groups English words into sets of synonyms called synsets, provides short, general definitions, and records the various semantic relations between these synonym sets.
The purpose is twofold: to produce a combination of a dictionary and thesaurus that is more intuitively usable, and to support automatic text analysis and artificial intelligence applications. The database and software tools have been released under a formal license and can be downloaded and used freely.
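The synset idea described above can be sketched in a few lines. This is a toy model of the structure only, with invented example entries; it is not the actual WordNet database or any real WordNet API:

```python
# Toy model of WordNet-style synsets (illustrative data, not the
# real WordNet lexicon): words are grouped into sets of synonyms,
# each with a short, general definition (its "gloss"), and a word
# may belong to several synsets, one per sense.
from dataclasses import dataclass

@dataclass(frozen=True)
class Synset:
    words: frozenset  # the set of synonymous words
    gloss: str        # short, general definition

SYNSETS = [
    Synset(frozenset({"talk", "speak"}), "exchange thoughts in spoken words"),
    Synset(frozenset({"talk", "lecture"}), "deliver a formal spoken address"),
]

def synsets_of(word: str) -> list:
    """Return every synset that contains the given word."""
    return [s for s in SYNSETS if word in s.words]

def synonyms(word: str) -> set:
    """Union of all synonym sets containing the word, minus the word itself."""
    result = set()
    for s in synsets_of(word):
        result |= s.words
    return result - {word}

# 'talk' belongs to two synsets (two senses), so its synonyms
# are drawn from both.
print(sorted(synonyms("talk")))  # ['lecture', 'speak']
```

This is the word-association structure Levy refers to later: given a user's word, the software can look up related words that "generally appear to be natural" in a reply.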
ConceptNet is a knowledge base created as part of the Open Mind Common Sense project, an artificial-intelligence scheme based at the Massachusetts Institute of Technology Media Lab. The goal is to build what’s known as a large “common sense knowledge base’’ developed from the contributions of many thousands of people across the web.
“We employ WordNet to provide Do-Much-More with certain useful linguistic data about words, helping us to generate responses that generally appear to be natural in terms of word association,’’ says Levy. “And we employ ConceptNet to provide Do-Much-More with real-world commonsense information so that Do-Much-More sometimes appears not only to understand what the user is saying but also to know something about the subject.’’
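The common-sense lookup Levy describes can be illustrated with a toy triple store. The facts and relation names below are invented for illustration; this is not the real ConceptNet data or its API:

```python
# Toy sketch of a ConceptNet-style common-sense knowledge base
# (illustrative triples only, not actual ConceptNet data): facts
# are (concept, relation, concept) triples, and a query returns
# what the base "knows" about a concept so a chatbot can appear
# to know something about the subject.

FACTS = [
    ("coffee", "IsA", "drink"),
    ("coffee", "UsedFor", "waking up"),
    ("kiss", "IsA", "gesture of affection"),
]

def about(concept: str) -> list:
    """Return (relation, object) pairs describing a concept."""
    return [(rel, obj) for subj, rel, obj in FACTS if subj == concept]

# If the user mentions coffee, the chatbot can retrieve real-world
# associations to weave into its response.
print(about("coffee"))  # [('IsA', 'drink'), ('UsedFor', 'waking up')]
```

WordNet supplies the linguistic relations between words; a store like this supplies relations between things in the world, which is why the two resources complement each other in Do-Much-More.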
Cheok likens this development to the early days of mobile telephones.
“There were these businessmen with these bricks and you thought it so geeky and who’d ever want to use that?’’ he says. “Initially, some technologies are a niche market. But once enough people use it you have a kind of bandwagon effect. Now, sure you can choose not to have a mobile phone, but because everyone else has got one, it’s become the new social norm. So I think a lot of these technologies will become like that – including robotics and mixed reality and all these things that people initially might find a little bit scary.’’
Correction: An earlier version of this article stated that David Levy was the only person to win the Loebner prize twice. He is in fact the only person to win it in two separate decades.
Human communication often encompasses a mixture of senses: people connect with one another through a combination of sight, sound, smell, taste and touch. The virtual world has become a popular mode of communication as individuals form bonds with people across the world, yet this technology currently engages only two of the senses, sight and sound. Professor Adrian David Cheok, director of the Mixed Reality Lab, believes that virtual communication may one day embrace all the human senses, making it a truly physical experience. Professor Cheok is at the forefront of this technology, integrating touch, taste and smell into current devices. ‘Telepresent technology’ may help form and maintain relationships at a distance in an increasingly globalised world.

Combining pre-existing mobile technology with a plug-in device, the Scentee provides smell-based notifications to the user. Designed by Professor Cheok, the small bulb-like device releases scents from cartridges. For example, a user may choose to set their alarm to wake up to the smell of coffee, or they may receive a certain smell depending on who contacts them. Professor Cheok’s Scentee has proved popular in Japan on a commercial scale, and has more recently become available worldwide. While digitising this chemical sensation is challenging, Professor Cheok aims to further the technology by manufacturing a magnetic coil that sits near the olfactory bulb (the part of the brain responsible for interpreting smell). This would stimulate an artificial perception of smell. “It is actually true that a smell can subconsciously change your mood, so they are very important senses that you can bring to the internet.”
Taste is another sense which Professor Cheok aims to bring into the virtual world. He has developed a device which stimulates the tongue through electrical impulses. It may recreate sweet, sour, salty or bitter sensations. Using different combinations of heat and amperage Professor Cheok and his team are experimenting to develop a host of different tastes through the device. The team envisions a future where family members may be able to experience eating together at the dinner table from the other side of the planet.
Touch is the final sense in the physical jigsaw. Through behaviours such as hugging, touch has the ability to comfort and create a sense of safety. Professor Cheok created the ‘Huggy Pajama’, designed primarily for parents who may want to send hugs to their children when away at work. Connected through the internet, the wearable jacket is filled with air pockets and heating components that inflate and warm in areas that help recreate the sensation of a hug. However, the virtual sensation of touch may be more subtle. Professor Cheok also helped design the RingU, described as the first ‘tele-hug’ ring. The device aims to bring friends, partners or family members closer together by providing a subtle hugging sensation on the finger. Through the internet, the user may send a signal to their companion’s RingU. The receiving ring then squeezes, providing a simple, effective message that the person is thinking of them. Users of the RingU may also change the intensity of the sensation and the colour that the ring emits, depending on the emotion that they want to convey.
Professor Cheok believes people may move from the age of information into the “age of experience”. He believes that, as this technology develops, virtual communication may become a fully immersive physical experience – important in the future of online communication. His goal is to “go beyond the chemicals” and create a fully integrated, immersive virtual experience. Individuals may therefore socialise and communicate with all of their senses through the internet. Rather than receiving a descriptive text of a trip to the pub, individuals may one day virtually experience the atmosphere through online communication. The Scentee, RingU and taste technology all mark the beginning of this complete, physical digitalisation of the senses. Professor Cheok and his colleagues are fast developing the technology to find novel ways to bring telepresence to the public.
How else might telepresent technology help bring people closer together?