Toronto 'cyborg' Steve Mann says he was assaulted in Paris McDonald's
Steve Mann was wearing his digital eye glass and trying to eat his ranch wrap in the peace of augmediated reality when things got really strange at a Paris McDonald's. Mann, who has been called the father of the wearable computer, is a University of Toronto professor and inventor of the EyeTap. The device, worn in front of his eye, is attached to his head by a strip of aluminium and nose pads. It acts as a camera to record what his eye sees and can display computer information, possibly altering what Mann sees as well. It also makes him look like the cyborg others claim he is. Read more
The Natural History Museum - more famous as the home of ancient fossils - is using augmented reality in a novel way to reanimate extinct creatures. With the aid of hand-held computers and a new interactive film, viewers will be able to see extinct hominids and dinosaurs walking around the room they are sitting in. Read more
Researchers from Georgia Tech have devised methods to take real-time, real-world information and integrate it into Google Earth, adding deep dynamic data to the previously sterile Googlescape. They use live video feeds from many angles to find the position and motion of various objects, then combine these with behavioural simulations to produce real-time animations for Google Earth or Microsoft Virtual Earth. Weather, birds and river flows are expected to follow soon.
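The Georgia Tech pipeline itself is not reproduced here, but the geo-registration step such a system relies on is easy to illustrate. The sketch below is a hypothetical example, not the authors' code: it maps an object detected in a fixed camera's video frame to longitude/latitude through a planar homography, so its motion could drive an animated placemark on Google Earth or Virtual Earth. The pixel and geographic control points are made up.

```python
# Hypothetical sketch (not the authors' code) of the geo-registration step:
# map a pixel in a fixed, georegistered video frame to longitude/latitude via
# a planar homography, so a tracked object's motion can drive an animated
# placemark on a virtual earth. All control points below are made up.
import numpy as np
import cv2

# Four pixel positions in the video frame and the matching ground positions
# (lon, lat) read off the aerial map; a flat-ground approximation.
image_pts = np.float32([[120, 640], [1180, 655], [1150, 90], [160, 75]])
geo_pts = np.float32([[-84.3963, 33.7756], [-84.3921, 33.7757],
                      [-84.3922, 33.7789], [-84.3961, 33.7788]])

H, _ = cv2.findHomography(image_pts, geo_pts)

def pixel_to_geo(u, v):
    """Project a frame pixel onto the map plane and return (lon, lat)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# A car tracked at pixel (640, 360) becomes a geographic position that can be
# written into a KML placemark and updated frame by frame.
print(pixel_to_geo(640, 360))
```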
Augmented reality to help astronauts make sense of space: Life aboard the International Space Station is hard work. Crewmembers have a multiplicity of complex tasks, potentially involving thousands of tools, components and other items. But ESA astronaut Frank De Winne has begun testing the prototype of an unusual helper designed to make astronaut life easier. The ESA-designed Wearable Augmented Reality (WEAR) is a wearable computer system that incorporates a head-mounted display over one eye to superimpose 3D graphics and data onto its wearer's field of view.
Reconstruct Mars automatically in minutes! A computer system is under development that can automatically combine images of the Martian surface, captured by landers or rovers, in order to reproduce a three-dimensional view of the red planet. The resulting model can be viewed from any angle, giving astronomers a realistic and immersive impression of the landscape. This important new development was presented at the European Planetary Science Congress in Potsdam by Dr Michal Havlena on Tuesday 15 September.
"The feeling of 'being right there' will give scientists a much better understanding of the images. The only input we need are the captured raw images and the internal camera calibration. After minutes of computation on a standard PC, a three dimensional model of the captured scene is obtained" - Dr Michal Havlena.
The growing amount of available imagery from Mars makes the manual image processing techniques used so far nearly impossible to handle. The new automated method, which allows fast, high-quality image processing, was developed at the Centre for Machine Perception of the Czech Technical University in Prague, under the supervision of Tomas Pajdla, as part of the EU FP7 project PRoVisG.

From a technical point of view, the image processing consists of three stages. The first step is determining the image order. If the input images are unordered, i.e. they do not form a sequence but are still somehow connected, a state-of-the-art image indexing technique can find images from cameras observing the same part of the scene. To start with, up to a thousand features are detected in each image and 'translated' into visual words, according to a visual vocabulary trained on images from Mars. Then, starting from an arbitrary image, the next image is chosen as the one that shares the highest number of visual words with the previous one.

The second step of the pipeline, the so-called 'structure-from-motion computation', determines the accurate camera positions and rotations in three-dimensional space. For each image pair representing neighbouring frames, it is enough to find five corresponding features to obtain the relative camera pose between the two images.

The last and most important step is the so-called 'dense 3D model generation' of the captured scene, which essentially creates and fuses depth maps of the Martian surface. To do this, the method uses the intensity disparities (parallaxes) present in images taken from the two distinct camera positions identified in the second step.
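As a rough illustration of the second and third stages, the sketch below assumes a calibrated camera and uses OpenCV; it is not the PRoVisG code. It recovers the relative pose of a neighbouring image pair from matched features via the five-point/essential-matrix route, then computes the intensity disparities that a dense reconstruction would turn into depth maps and fuse. The file names and calibration matrix are placeholders.

```python
# Minimal sketch of stages two and three for one pair of neighbouring images,
# assuming a calibrated camera and OpenCV; NOT the PRoVisG implementation.
# File names and the calibration matrix K are placeholders.
import numpy as np
import cv2

K = np.array([[700.0, 0.0, 512.0],   # hypothetical internal camera calibration
              [0.0, 700.0, 384.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("mars_frame_0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("mars_frame_1.png", cv2.IMREAD_GRAYSCALE)

# Detect up to a thousand features per image and match them (SIFT stands in
# for whatever detector the real pipeline uses).
sift = cv2.SIFT_create(nfeatures=1000)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Stage 2, structure from motion: the essential matrix is estimated by RANSAC
# over five-point minimal samples, echoing the five corresponding features
# needed for a relative camera pose.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# Stage 3, per image pair: intensity disparities (parallaxes) between the two
# views. In the full pipeline the pair would first be rectified using R and t,
# and many such disparity-derived depth maps would be fused into one 3D model.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = stereo.compute(img1, img2).astype(np.float32) / 16.0
```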
"The pipeline has already been used successfully to reconstruct a three dimensional model from nine images captured by the Phoenix Mars Lander, which were obtained just after performing some digging operation on the Mars surface. The challenge is now to reconstruct larger parts of the surface of the red planet, captured by the Mars Exploration Rovers Spirit and Opportunity" - Dr Michal Havlena.
Augmenting Aerial Earth Maps with Dynamic Information: To appear in IEEE ISMAR (International Symposium on Mixed and Augmented Reality) 2009, Orlando, Florida, USA. Using crowd-casted videos, we generate a dynamic, live city in augmented virtual earth maps.
- Authors: Kihwan Kim, Dr. Sangmin Oh, Jeonggyu Lee and Professor Dr. Irfan Essa (Director) - Narration and special thanks to: Dr. Nick Diakopoulos
Imagine seeing interesting information pop up as you stroll around. It is almost like a sixth sense, and it used to be mainly the stuff of science fiction. But Augmented Reality (AR) - in which live video images like those from a mobile phone camera are tagged with relevant data - is starting to be widely available. Read more
Up to now virtual reality has proved cumbersome as a design tool, but European researchers are finalising a system that brings virtuality to the wider world. Virtual reality (VR) is a powerful tool, but its true potential remains unrealised. Applications mixing the virtual and real worlds, called mixed or augmented reality (AR), are weak. There are few reliable systems, and those that exist are very expensive. Collaboration is limited and still relatively unsophisticated. And the state of the art is anchored to the desktop or to multi-tiled (multi-screen) displays; both are fixed solutions.
Andrei Lintu and Marcus Magnor, from the Max Planck Institute for Informatics in Saarbrücken, Germany, have created a tool to project an image upon another image seen through the eyepiece of a telescope. Augmented Reality has been used before with such things as virtual headsets in the operating theatre and with futuristic aeroplane cockpit head-up displays: a computer-generated image is overlaid on what you see.
The new augmented-reality system combines customised planetarium software, a motorised telescope, a portable computer and a custom-made projection unit.
Left: Standard Eyepiece View. The image shows the visual appearance of the Andromeda Galaxy (M31) through an eyepiece. Right: Augmented Eyepiece View. The same field of view with the overlaid image and additional information blended into the upper left corner.
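How the overlay stays registered with the sky is not detailed above, but a simple version of the alignment step can be sketched. The code below is an illustrative assumption, not the authors' software: it projects a catalogue object's RA/Dec into pixel offsets on the overlay, given the motorised telescope's reported pointing and the eyepiece field of view, using a gnomonic (tangent-plane) projection. The M31 coordinates, pointing and field-of-view values are example figures only.

```python
# Illustrative sketch (not the authors' software) of overlay alignment:
# project a catalogue object's RA/Dec into pixel offsets on the overlay,
# given the telescope's reported pointing and the eyepiece field of view.
# Gnomonic (tangent-plane) projection; all numeric values are examples.
import numpy as np

def radec_to_overlay_pixels(obj_ra, obj_dec, scope_ra, scope_dec,
                            fov_deg, overlay_size=1024):
    """Map (RA, Dec) in degrees to (x, y) pixels on a square overlay centred
    on the telescope's current pointing direction."""
    ra, dec = np.radians([obj_ra, obj_dec])
    ra0, dec0 = np.radians([scope_ra, scope_dec])
    # Gnomonic projection onto the tangent plane at the pointing direction.
    cos_c = (np.sin(dec0) * np.sin(dec) +
             np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec) -
           np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    scale = overlay_size / np.radians(fov_deg)   # pixels per radian
    x = overlay_size / 2 - xi * scale            # RA increases towards the left
    y = overlay_size / 2 - eta * scale           # image y runs downward
    return x, y

# M31's catalogue position, with the telescope pointed slightly off-centre and
# a one-degree eyepiece field of view.
print(radec_to_overlay_pixels(10.6847, 41.2690, 10.5, 41.2, fov_deg=1.0))
```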