Until 2020 I was a principal investigator at the Institute of Neuroinformatics, which is part of the Faculty of Science of the University of Zurich and also the Department of Information Technology and Electrical Engineering (D-ITET) of the Swiss Federal Institute of Technology (ETH) Zurich. At the moment I am inactive in research due to commitments with startups, although I still mentor technology transfer cases and teach at the Uni Zurich Innovation Hub. Some of my projects are described below. My old profile page is here (not up to date).
Ada – Intelligent Space
I worked on this project during my PhD, along with a large team of over 20 people. Ada was shown to about half a million people at the Swiss national exhibition Expo.02 in Neuchâtel from 15 May 2002 to 20 October 2002. This project can be described as part robot, part performance art, part public experiment, and part research project. It is described in more detail on its official web page, but think smart disco meets intelligent room and you will get the general idea. Related work on building intelligence has also been done here.
Some of the things we built and found out during the project include:
Interactive illuminated pressure-sensitive floor [Delbruck et al. 2007].
It was possible to influence where people as a group moved in the space using cues, without them noticing that this was occurring [Eng et al. 2005].
Changes in ambient settings of the space affected visitor moods and attitudes to the space [Eng et al. 2006].
More than 20 years later, the issues we explored in the project are still relevant. The project was a forerunner of today's (and tomorrow's) world of IoT for managing building efficiency and occupant well-being.
You can see me here giving a talk about Ada on TEDx. Giving this talk was an interesting experience, not least because my videos didn’t work (even though I had tested them beforehand) and I had to ad-lib my way through most of the talk describing what was in the non-existent videos. Luckily, the organizers managed to edit the video afterwards so that you couldn’t really tell the difference.
Neuroscience of Virtual Reality
The key questions I'm interested in are:
What happens in the brain during virtual reality (VR) interactions?
How can this processing be manipulated?
Some of the things we have discovered are summarized in headline form below (some links lead to paywalled journal websites):
If you imagine doing something while it is shown to you in a first-person view, there is a measurably increased physiological response to an unexpected stimulus [Haegni et al. 2008].
The more you feel that you own a virtual body, the more slowly time seems to pass in a virtual environment (i.e. you seem to get lost in the interaction) [Haegni et al. 2007].
Brain activity during imagining, observing and imitation of VR actions can be read out via functional near-infrared spectroscopy, but this is hard and the signal is not very good [Holper et al. 2010] – there are also differences in signal variability between those with high and low brain activity responses [Holper et al. 2012].
If you put a first-person view of virtual arms in the correct location relative to your own viewpoint rather than on a normal monitor in front of you, brain activity in certain regions is enhanced [Kobashi et al. 2012]. To discover this result, we developed a simple mirrored display combined with a normal monitor [Eng et al. 2009].
Brain activation during virtual leg movements is enhanced by combining VR observation and mental imagery [Villiger et al. 2013].
Virtual Reality Neurorehabilitation
The basic research on virtual reality is applied in clinical work on interactive virtual reality training for patients with stroke, cerebral palsy, spinal cord injury and other conditions. There is also a video below explaining the basic concept behind the VR rehabilitation system. If you want to know more about virtual reality applied to rehabilitation, you may want to consider joining the International Society for Virtual Rehabilitation (I was on its board from 2012 to 2017). It publishes a regular newsletter, and there is a Facebook group for everyone interested in the topic.
Neuromorphic Engineering
Neuromorphic engineering is, to put it briefly, the exploitation of key principles by which brains work to build artificial computing systems that work in the real world. The long-term bet is that these systems will show large advantages in terms of decision robustness in uncertain environments, reaction time, and energy efficiency.
I have been involved in a relatively small way in some projects:
Cerebellum chip: this paper describes a hardware simulation of part of the circuitry of the cerebellum and its associated deep brain nuclei. This circuit is of interest because it may be involved in regulating the fine timing of motor responses. [Hofstötter 2004]
Topology learning: imagine a highly distributed sensor system whose sensors are placed somewhere at random. These sensors need to work out where they are relative to one another. In certain situations, they can do this using the correlations between their sensor inputs, since sensors that are physically close tend to receive correlated signals. This paper demonstrates the learning principle using tactile data from the interactive floor used in the Ada project and the neuromorphic vision sensor we are now selling via the spin-off company iniVation. [Boerlin 2009]
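The core idea can be illustrated with a toy simulation (a minimal sketch, not the algorithm of [Boerlin 2009]; the sensor count, kernel width and random-walk stimulus below are all made-up assumptions): a moving stimulus activates nearby sensors together, so each sensor's most-correlated partner tends to be its spatial nearest neighbour.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 sensors scattered at random 2-D positions.
n = 20
pos = rng.uniform(0, 1, size=(n, 2))

# A stimulus (e.g. a person walking on the floor) performs a random walk;
# each sensor responds more strongly the closer the stimulus passes.
steps = 5000
walk = np.cumsum(rng.normal(0, 0.02, size=(steps, 2)), axis=0) % 1.0
dists = np.linalg.norm(walk[:, None, :] - pos[None, :, :], axis=2)
activity = np.exp(-(dists / 0.1) ** 2)  # Gaussian receptive field

# Correlate the sensor time series: nearby sensors fire together.
corr = np.corrcoef(activity.T)
np.fill_diagonal(corr, -np.inf)

# Check: a sensor's top correlate is usually its nearest spatial neighbour.
spatial = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
np.fill_diagonal(spatial, np.inf)
matches = np.mean(corr.argmax(axis=1) == spatial.argmin(axis=1))
print(f"fraction of sensors whose top correlate is their nearest neighbour: {matches:.2f}")
```

From the correlation matrix alone (no position information) one can then reconstruct an approximate layout, e.g. by treating low correlation as large distance and embedding the sensors with multidimensional scaling.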
Reinforcement learning for robot locomotion: this was my undergraduate thesis project, carried out together with Alec Robertson. In 1996, we built a six-legged robot (see picture) that could learn to walk as fast as possible. It learned to synchronize its leg movements to avoid falling on its belly, and could walk on flat surfaces and up an inclined plane. It was controlled via PWM over the parallel ports of two PCs, which were networked together via their serial ports.
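The gait-learning principle can be sketched with a toy tabular Q-learning example (a modern stand-in, not the original 1996 controller; the two-group tripod abstraction and reward values below are illustrative assumptions): the learner discovers that alternating the two tripod leg groups keeps the robot off its belly.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy abstraction: the six legs form two tripod groups, A = {0,2,4} and
# B = {1,3,5}. The state is the group swung on the previous step; the
# action is the group to swing now. Alternating groups is a stable tripod
# step (+1 reward); swinging the same group twice tips the robot onto its
# belly (-1 reward) and leaves the gait where it was.
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(2000):
    if rng.random() < eps:
        action = int(rng.integers(n_actions))   # explore
    else:
        action = int(Q[state].argmax())         # exploit
    if action != state:
        reward, next_state = 1.0, action        # clean tripod step forward
    else:
        reward, next_state = -1.0, state        # fell on its belly
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(Q)  # the greedy policy alternates the two tripod groups
```

The real robot faced a much harder version of this problem (continuous leg timing, real sensor noise, real falls), but the structure is the same: trial and error plus a reward signal shapes the gait.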