Using geometry to help robots map their environment - article


Feature based graph-SLAM in structured environments
P. de la Puente, D. Rodriguez-Losada

To get around unknown environments, most robots will need to build maps. To help them do so, robots can use the fact that human environments are often made of geometric shapes like circles, rectangles and lines. This paper presents a flexible framework for geometrical robotic mapping in structured environments.

Most human designed environments, such as buildings, present regular geometrical properties that can be preserved in the maps that robots build and use. If some information about the general layout of the environment is available, it can be used to build more meaningful models and significantly improve the accuracy of the resulting maps. Human cognition exploits domain knowledge to a large extent, usually employing prior assumptions for the interpretation of situations and environments. When we see a wall, for example, we assume that it’s straight. We’ll probably also assume that it’s connected to another orthogonal wall.

This research presents a novel framework for inferring knowledge about the structure of the environment and incorporating it into the robotic mapping process. A hierarchical representation of geometrical elements (features) and the relations between them (constraints) provides enhanced flexibility, also making it possible to correct wrong hypotheses. Several feature and constraint types are already available, and new ones can easily be added.
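
To make the idea concrete, below is a minimal Python sketch (an illustration, not the authors' implementation) of how the orientations of line features extracted from laser data could be refined under soft parallelism and orthogonality constraints; the feature values, the constraint list and the simple relaxation scheme are all assumptions.

import numpy as np

# Hypothetical line features, each represented by its estimated orientation (radians).
# In a feature-based graph-SLAM system these would come from segmented laser scans.
theta = np.array([0.02, 1.55, 3.10, 1.60])  # noisy estimates of wall orientations

# Constraints between feature pairs: "parallel" or "orthogonal".
constraints = [(0, 2, "parallel"), (0, 1, "orthogonal"), (1, 3, "parallel")]

def constraint_residual(theta, i, j, kind):
    """Angular error of one geometric constraint, wrapped to [-pi/2, pi/2)."""
    target = 0.0 if kind == "parallel" else np.pi / 2
    err = (theta[j] - theta[i]) - target
    return (err + np.pi / 2) % np.pi - np.pi / 2

# Simple iterative refinement: nudge orientations to reduce constraint errors
# while staying close to the original measurements (a crude graph relaxation).
measured = theta.copy()
for _ in range(200):
    grad = theta - measured            # prior term: stay near the measurements
    for i, j, kind in constraints:
        r = constraint_residual(theta, i, j, kind)
        grad[i] -= r
        grad[j] += r
    theta -= 0.1 * grad

print("refined orientations (deg):", np.degrees(theta).round(2))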

A variety of experiments with both synthetic and real data were conducted. The map below was generated from data gathered by a robot navigating Killian Court at MIT with a laser scanner, and shows that the geometrical properties of the environment are well respected: features are parallel, orthogonal and straight where they should be.


What do teachers mean when they say ‘do it like me’? - article


Discovering relevant task spaces using inverse feedback control
Nikolay Jetchev, Marc Toussaint

Teaching robots to do tasks is useful, and teaching them in an easy and non time-intensive way is even more useful. The algorithm TRIC presented in this paper allows robots to observe a few motions from a teacher, grasp the essence of the demonstration, and then repeat it and adapt it to new situations.

Robots should learn to move and do useful tasks in order to be helpful to humans. However, tasks that are easy for a human, like grasping a glass, are not so obvious for a machine. Programming a robot requires time and work. Instead, what if the robot could watch the human and learn why the human did what he did, and in what way?

This is something we humans do all the time. Imagine you are playing tennis and the teacher says ‘do the forehand like me’ and then shows an example. How should the student understand this? Should he move his fingers, or his elbow? Should he watch the ball, the racket, the ground, or the net? All these possible reference points can be described with numbers. The algorithm presented in this paper, called Task Space Retrieval Using Inverse Feedback Control (TRIC), helps a robot learn the important aspects of a demonstrated motion. Afterwards, the robot should be able to reproduce the moves like an expert, even if the task changes slightly.
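
As a rough illustration of the underlying intuition (not the actual TRIC formulation), the Python sketch below scores candidate task-space features by how consistent their final values are across demonstrations; the feature names and the inverse-variance heuristic are assumptions for illustration.

import numpy as np

# Hypothetical task-space features recorded at the end of each demonstration,
# e.g. distances between the hand and parts of the object, elbow height, ...
# Rows: demonstrations, columns: candidate task-space features.
rng = np.random.default_rng(0)
demos = np.column_stack([
    rng.normal(0.02, 0.005, 10),   # hand-to-lid distance: consistently small
    rng.normal(0.3, 0.2, 10),      # elbow height: varies a lot between demos
    rng.normal(0.01, 0.004, 10),   # finger-to-handle distance: consistently small
])

# A crude relevance heuristic (not the actual TRIC criterion): features that end up
# at consistent, low-variance values across demonstrations are likely task-relevant.
variance = demos.var(axis=0)
relevance = 1.0 / (variance + 1e-6)
relevance /= relevance.sum()

for name, w in zip(["hand-to-lid", "elbow height", "finger-to-handle"], relevance):
    print(f"{name:>18s}: relevance weight {w:.3f}")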

The algorithm was successfully tested in simulation on various grasping and manipulation tasks. The figure above shows one of these tasks, in which a robot hand must approach a box and open the cover. The robot was shown 10 sets of trajectories from a simulated teacher. After training, it was asked to open a series of boxes that had been moved, rotated, or resized. Overall, TRIC performed well in these scenarios, succeeding in 24 out of 25 tries.



ManyEars: open source framework for sound processing - article


The ManyEars open framework
François Grondin, Dominic Létourneau, François Ferland, Vincent Rousseau, François Michaud

Making robots that are able to localize, track and separate multiple sound sources, even in noisy places, is essential for their deployment in our everyday environments. This could for example allow them to process human speech, even in crowded places, or identify noises of interest and where they came from. Unlike for vision, however, there are few software and hardware tools for robot audition that can easily be integrated into robotic platforms.

The ManyEars open source framework allows users to easily experiment with robot audition. The software, which can be downloaded here, is compatible with ROS (Robot Operating System). Its modular design makes it possible to interface with different microphone configurations and hardware, thereby allowing the same software package to be used for different robots. A Graphical User Interface is provided for tuning parameters and visualizing information about the sound sources in real-time. The ManyEars software library is composed of five modules: Preprocessing, Localization, Tracking, Separation and Postprocessing.
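
The sketch below shows, in schematic Python rather than the actual ManyEars C API or its ROS bindings, how the five stages might be chained in a host application; the class and method names are illustrative assumptions.

import numpy as np

class Preprocessing:
    def run(self, frames):            # e.g. windowing and FFT per microphone channel
        return np.fft.rfft(frames, axis=1)

class Localization:
    def run(self, spectra):           # e.g. beamforming to find potential directions
        return [{"azimuth": 30.0, "energy": 1.0}]

class Tracking:
    def run(self, potential_sources): # e.g. filtering detections over time
        return [{"id": 1, "azimuth": 30.0}]

class Separation:
    def run(self, spectra, tracked):  # e.g. geometric source separation per source
        return {src["id"]: spectra.mean(axis=0) for src in tracked}

class Postprocessing:
    def run(self, separated):         # e.g. spectral filtering back to waveforms
        return {sid: np.fft.irfft(spec) for sid, spec in separated.items()}

# Wire the stages together for one block of multi-channel audio (8 mics, 512 samples).
frames = np.random.randn(8, 512)
spectra = Preprocessing().run(frames)
tracked = Tracking().run(Localization().run(spectra))
waveforms = Postprocessing().run(Separation().run(spectra, tracked))
print("separated sources:", list(waveforms.keys()))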

To make use of the ManyEars software, a computer, a sound card and microphones are required. ManyEars can be used with commercially available sound cards and microphones. However, commercial sound cards present limitations when used for embedded robotic applications: they can be expensive, have functionalities that are not required for robot audition, and take up significant power and space. For these reasons, the authors introduce a customized microphone board and sound card, available as an open hardware solution, that can be used on your robot and interfaced with the software package. The board uses an array of microphones, instead of only one or two, thereby allowing a robot to localize, track, and separate multiple sound sources.

The framework is demonstrated using a microphone array on the IRL-1 robot. The placement of the microphones is marked by red circles. Results show that the robot is able to track two human speakers producing uninterrupted speech sequences, even when they are moving and crossing paths. For videos of the IRL-1, check out the lab’s YouTube Channel.


Tracking 3D objects in real-time using active stereo vision - article


Real-time visuomotor update of an active binocular head
Michael Sapienza, Miles Hansard, Radu Horaud

Humans have the ability to track objects by turning their head toward areas of interest and gazing at them. Integrating images from both eyes provides depth information that allows us to represent 3D objects. Such feats could prove useful in robotic systems with similar vision functionalities. POPEYE, shown in the video below, is able to independently move its head and the two cameras used for stereo vision.

To perform 3D reconstruction of object features, the robot needs to know the spatial relationship between its two cameras. For this purpose, Sapienza et al. calibrate the robot vision system before the experiment by placing cards with known patterns in the environment and systematically moving the camera motors to learn how these motor changes affect the images captured. After calibration, and thanks to a homography-based method, the robot can relate how much its motors have moved to changes in the image features. Measuring motor changes is very fast, allowing for real-time 3D tracking.
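
The Python sketch below illustrates the general idea under simplifying assumptions (it is not the homography-based method of the paper): a linear visuomotor map from pan/tilt motor changes to image-feature displacement is estimated during calibration and then used to predict image motion from fast encoder readings.

import numpy as np

rng = np.random.default_rng(1)

# Calibration phase: systematically move the motors and record how a known
# pattern shifts in the image (here simulated with an unknown "true" map plus noise).
true_map = np.array([[120.0, 5.0], [-3.0, 110.0]])         # pixels per radian
motor_deltas = rng.uniform(-0.1, 0.1, size=(50, 2))         # (pan, tilt) changes
image_shifts = motor_deltas @ true_map.T + rng.normal(0, 0.5, (50, 2))

# Least-squares estimate of the visuomotor map.
J, *_ = np.linalg.lstsq(motor_deltas, image_shifts, rcond=None)
J = J.T

# Tracking phase: reading the encoders is fast, so the predicted image shift
# can be used to update the 3D reconstruction in real time.
new_motor_delta = np.array([0.05, -0.02])
predicted_shift = J @ new_motor_delta
print("predicted image shift (px):", predicted_shift.round(2))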

Results show that the robot is able to keep track of a human face while performing 3D reconstruction. In the future, the authors hope to add zooming functionalities to their method.



Using 3D snapshots to control a small helicopter - article


Design of a 3D snapshot based visual flight control system using a single camera in hover
Matthew A. Garratt, Andrew J. Lambert, Hamid Teimoori

To control a flying robot, you usually need to know the attitude of the robot (roll, pitch, yaw), where it is in the horizontal plane (x, y), and how high it is from the ground (z). While attitude measurements are provided by inertial sensors on board the robot, most flying robots rely on GPS and additional range sensors such as ultrasound sensors, lasers or radars to determine their position and altitude. The GPS signal, however, is not always available in cluttered environments and can be jammed, and additional sensors increase the weight that needs to be carried by the robot. Instead, Garratt et al. propose to replace these position sensors with a single small, low-cost camera.

By comparing a snapshot taken from a downward pointing camera and a reference snapshot taken at an earlier time, the robot is able to calculate its displacement in the horizontal plane. The loom of the image is used to calculate the change in altitude. Image loom corresponds to image expansion or contraction as can be seen in the images below. By reacting to image displacements, the robot is able to control its position.

Grass as seen from altitudes of 0.25 m, 0.5 m, 1.0 m and 2.0 m (from left to right).
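
The Python sketch below gives a simplified flavour of how such control signals could be extracted from two snapshots: a translation estimate via phase correlation, and a crude loom (scale change) estimate obtained by testing a few candidate zoom factors. It is an illustration under strong assumptions, not the method used in the paper.

import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the (dy, dx) translation between two equally sized grayscale images."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dims = np.array(ref.shape, dtype=float)
    shift = np.array(peak, dtype=float)
    wrap = shift > dims / 2
    shift[wrap] -= dims[wrap]                # wrap large peaks to negative shifts
    return shift

def loom_estimate(ref, cur, scales=(0.9, 0.95, 1.0, 1.05, 1.1)):
    """Pick the zoom factor of cur relative to ref that matches best (nearest neighbour)."""
    h, w = ref.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best, best_err = 1.0, np.inf
    for s in scales:
        ys = np.clip(((yy - h / 2) / s + h / 2).astype(int), 0, h - 1)
        xs = np.clip(((xx - w / 2) / s + w / 2).astype(int), 0, w - 1)
        err = np.mean((cur[ys, xs] - ref) ** 2)
        if err < best_err:
            best, best_err = s, err
    return best

# Synthetic example: the "current" snapshot is the reference shifted by (3, -5) pixels.
rng = np.random.default_rng(2)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)
print("estimated shift (dy, dx):", phase_correlation_shift(ref, cur))
print("estimated loom factor (same altitude):", loom_estimate(ref, ref))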

Using this strategy, the researchers were able to show in simulation that a helicopter could perform take-off, hover and the transition from low speed forward flight to hover. The ability to track horizontal and vertical displacements using 3D snapshots from a single camera was then confirmed in reality using a Vario XLC gas-turbine helicopter.

In the future, the authors intend to further test the 3D snapshot control strategy in flight using their Vario XLC helicopter before moving to smaller platforms such as an Asctec Pelican quadrotor. Additional challenges include taking into account the shadow of the robot, which might change position from snapshot to snapshot.



Explosive motions - article


Optimal variable stiffness control: formulation and application to explosive movement tasks
David Braun, Matthew Howard, Sethu Vijayakumar

Throwing, hitting, jumping or kicking are often referred to as explosive movements, since they require the sudden release of large amounts of energy to be successful. Instead of using large and powerful motors to achieve such movements, researchers are turning to compliant actuators with elastic components capable of passively storing and releasing energy. Varying the stiffness of the actuator makes it possible to go from a highly compliant actuator that is safe for human-robot interaction to a stiffer one that is optimized for the task at hand. Exploring how stiffness impacts task performance is highly complex, however, and is usually done through trial and error.

Instead, Braun et al. propose a framework that optimizes the control of actuator stiffness and torque automatically. Demonstrations are performed using a robot arm, in simulation and in reality, on a ball-throwing task (see video below). Interestingly, controlling the torque and stiffness independently leads to better performance than systems where stiffness cannot be controlled independently.
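
As a toy illustration of why stiffness modulation matters (this is not the authors' optimal-control formulation), the Python sketch below simulates a series-elastic joint in which the transmitted torque is k(t) times the motor-link deflection, so the chosen stiffness profile shapes how elastic energy is stored during wind-up and released at the throw; all numbers are made up.

import numpy as np

dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)
I = 0.05                                      # link inertia (kg m^2), assumed

def simulate(k_profile, motor_angle):
    theta, omega = 0.0, 0.0
    for k, q_m in zip(k_profile, motor_angle):
        tau = k * (q_m - theta)               # spring torque transmitted to the link
        omega += (tau / I) * dt               # semi-implicit Euler integration
        theta += omega * dt
    return omega                              # link velocity at the end of the motion

motor = np.where(t < 0.5, -1.0, 1.0)          # wind up, then swing through
stiff_constant = np.full_like(t, 30.0)
stiff_varying = np.where(t < 0.5, 5.0, 60.0)  # compliant wind-up, stiff release

print("release velocity, constant stiffness:", round(simulate(stiff_constant, motor), 3))
print("release velocity, variable stiffness:", round(simulate(stiff_varying, motor), 3))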

Currently, the authors are implementing the proposed framework on anthropomorphic variable stiffness devices with many degrees of freedom, such as the DLR Hand-Arm System. This work provides a blueprint for achieving optimal control in the next generation of robotic devices where variable stiffness actuation is likely to play a dominant role.



Ingredients for autonomous construction - article


Autonomous construction using scarce resources in unknown environments: ingredients for an intelligent robotic interaction with the physical world
Stéphane Magnenat, Roland Philippsen, Francesco Mondada

Most research in robotics focuses on a specific problem: building better hardware, implementing new algorithms, or demonstrating a new task. Combining all these state-of-the-art ingredients into a single system is the key to making autonomous robots capable of performing useful work in realistic environments. With this in mind, Stéphane Magnenat walks us through all the steps needed to perform autonomous construction using the marXbot in the video below. To make the task challenging, the building blocks from which the robot builds towers are distributed throughout the environment, which is riddled with ditches that can only be overcome by using these same building blocks as bridges. Because there are few building blocks, the robot has to figure out how to move them in a near-optimal way so that it can navigate the environment while still building the tower. Furthermore, the robot does not have any prior information about its environment and can only use limited computational resources, as is often the case in realistic robot scenarios.

Solving this challenge requires an integrated system architecture (see figure below) that leverages modern algorithms and representations. The architecture is implemented using ASEBA, an open-source control architecture for microcontrollers. The low level implements reactive behaviors such as avoiding obstacles and ditches or grasping objects. The high level takes care of mapping the environment (using a version of FastSLAM), path planning and reasoning.
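
The Python sketch below is a schematic rendering of this two-layer split (the real low level runs as ASEBA scripts on microcontrollers, not Python): reactive reflexes act on raw sensor values, and the deliberative layer is consulted only when no reflex fires; the thresholds, sensor names and commands are placeholder assumptions.

class LowLevel:
    """Reactive layer: runs at high frequency, close to the sensors."""
    def step(self, proximity, ground):
        if min(ground) < 0.2:                      # ditch detected under the robot
            return {"left": -0.1, "right": -0.1}   # back off immediately
        if min(proximity) < 0.05:                  # obstacle very close
            return {"left": 0.05, "right": -0.05}  # turn away
        return None                                # no reflex triggered, defer to planner

class HighLevel:
    """Deliberative layer: mapping (e.g. a FastSLAM variant), planning, reasoning."""
    def decide(self, occupancy_map, goal):
        return {"left": 0.1, "right": 0.1}         # placeholder: drive toward the goal

low, high = LowLevel(), HighLevel()
reflex = low.step(proximity=[0.5, 0.4], ground=[0.9, 0.8])
command = reflex if reflex is not None else high.decide(occupancy_map=None, goal=(2, 3))
print("wheel command:", command)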

The authors hope that such an integrated approach could help shed light on the capabilities required for intelligent physical interaction with the real world.


Learning acrobatic maneuvers for quadrocopters - article


Adaptive fast open-loop maneuvers for quadrocopters
Sergei Lupashin, Raffaello D'Andrea

Have you ever seen those videos of quadrocopters performing acrobatic maneuvers?

The latest paper on the Autonomous Robots website presents a simple method to make your robot achieve adaptive fast open-loop maneuvers, whether it’s performing multiple flips or fast translational motions. The method is designed to be straightforward to implement and understand, and general enough to be applied to problems outside of aerial acrobatics.

Before the experiment, an engineer with knowledge of the problem defines a maneuver as an initial state, a desired final state, and a parameterized control function responsible for producing the maneuver. A model of the robot motion is used to initialize the parameters of this control function. Because models are never perfect, the parameters then need to be refined during experiments. The error between the robot’s desired state and its achieved state after each maneuver is used to iteratively correct parameter values. More details can be found in the figure below or in the paper.

Method to achieve adaptive fast open-loop maneuver. p represents the parameters to be adapted, C is a first-order correction matrix, γ is a correction step size, and e is a vector of error measurements. (1) The user defines a motion in terms of initial and desired final states and a parameterized input function. (2) A first-principles continuous-time model is used to find nominal parameters p0 and C. (3) The motion is performed on the physical vehicle, (4) the error is measured and (5) a correction is applied to the parameters. The process is then repeated.
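
A minimal numerical sketch of this iteration in Python, with a made-up toy "vehicle" standing in for the quadrocopter model, looks as follows; it only illustrates the update p <- p - gamma * C * e, not the paper's actual maneuver parameterization.

import numpy as np

def perform_maneuver(p):
    # Toy stand-in for flying the maneuver: the true dynamics differ from the model.
    true_gain = np.array([[1.2, 0.1], [0.0, 0.8]])
    return true_gain @ p

desired = np.array([1.0, 2.0])        # desired final state
C = np.eye(2)                         # first-order correction matrix (from the model)
gamma = 0.5                           # correction step size
p = desired.copy()                    # nominal parameters from the first-principles model

for k in range(15):
    e = perform_maneuver(p) - desired # measured final-state error after the attempt
    p = p - gamma * C @ e             # parameter correction
    print(f"iteration {k:2d}: error norm = {np.linalg.norm(e):.4f}")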

Experiments were performed in the ETH Flying Machine Arena which is equipped with an 8-camera motion capture system providing robot position and rotation measurements used for parametric learning.


iCub drums and crawls using bio-inspired control - article


Toward simple control for complex, autonomous robotic applications: combining discrete and rhythmic motor primitives
Sarah Degallier, Ludovic Righetti, Sebastien Gay, Auke Ijspeert

Ever see a lizard effortlessly run up a wall?

Like most vertebrates, lizards are able to quickly adapt to new environments in a robust way thanks to a special type of movement generator. The idea is that a high-level planner (the brain) is responsible for determining the key characteristics of a movement such as the position that needs to be reached by a limb or the amplitude and frequency with which the limbs should perform rhythmic motions. These high-level commands then serve as an input to motion primitives responsible for activating muscles in the correct sequence. Motion primitives are typically organized at the spinal level through neural networks called central pattern generators (CPGs).

This control architecture has many advantages for robotics. First, once the motion primitives are designed, only high-level commands are required to control the entire motion of the robot. Therefore, instead of planning the positions of all joints, the motion planner only needs to issue high-level goals such as “reach there” or “move your arm rhythmically with this amplitude and this frequency”. This greatly reduces the complexity of planning motions for robots with many degrees of freedom. Furthermore, CPGs are very fast, have low computational cost and can be modulated by sensory feedback in order to obtain adaptive behaviors.
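
As a toy sketch of this idea (simplified from the CPG formulation in the paper), the Python snippet below drives a single joint with a discrete primitive, a point attractor pulling toward a commanded target posture, superposed with a rhythmic primitive whose amplitude and frequency are the only other high-level commands; the gains and values are assumptions.

import numpy as np

dt = 0.01
t = np.arange(0.0, 5.0, dt)

target, amplitude, frequency = 0.8, 0.3, 1.5   # the high-level commands
alpha = 4.0                                    # attractor gain of the discrete primitive

y_discrete = 0.0
trajectory = []
for ti in t:
    y_discrete += alpha * (target - y_discrete) * dt             # discrete primitive
    y_rhythmic = amplitude * np.sin(2 * np.pi * frequency * ti)  # rhythmic primitive
    trajectory.append(y_discrete + y_rhythmic)                   # joint command

print("joint command at t = 5 s:", round(trajectory[-1], 3))
print("discrete component has converged to:", round(y_discrete, 3))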

Using this control architecture, Degallier et al. were able to turn the iCub humanoid seen in the video below into an on-demand drummer. Random visitors at a robotics conference were able to change the score that the iCub was playing on-line, or to test how well it adapted when its drums were moved. To show the generality of their approach, the authors then applied the same architecture to make the iCub crawl and reach for objects. Although one behaviour is rhythmic (crawling) and the other discrete (reaching), the robot was easily able to switch between the two.



Cooperative modular satellites - article


Cooperative control of modular space robots
Chiara Toglia, Fred Kennedy, Steven Dubowsky

In Modular Space Robotics, modules self-assemble while in orbit to create larger satellites for specific missions. Modular satellites have the potential to reduce mission costs (small satellites are cheaper to launch), increase reliability, and enable on-orbit repair and refueling. Each module carries its own load of sensors, fuel and attitude control actuators (thrusters), so assembled modules have redundant sensing and actuation capabilities. By fusing sensor data, the assembled satellite can follow its trajectory more precisely, and smart thruster activation can help save fuel.

The challenge is to figure out how to control such a self-assembled robot so as to minimize fuel consumption while balancing the fuel distribution across modules and improving trajectory following. To this end, Toglia et al. propose a cooperative controller in which one of the modules, with information about the configuration of all other modules, is responsible for computing an optimal control scheme. An extended Kalman-Bucy filter is used for sensor fusion.
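
The Python sketch below illustrates, under strong simplifying assumptions (the paper uses an extended Kalman-Bucy filter and a full optimal-control formulation), the two ingredients of the cooperative controller: inverse-variance fusion of redundant position measurements, and splitting a commanded thrust among modules in proportion to their remaining fuel so that fuel use stays balanced.

import numpy as np

# 1) Sensor fusion: each module measures the stack's position with its own noise level.
measurements = np.array([10.2, 9.9, 10.4])      # position estimates from 3 modules
variances = np.array([0.04, 0.01, 0.09])        # each module's sensor noise variance
weights = (1.0 / variances) / np.sum(1.0 / variances)
fused_position = weights @ measurements
print("fused position estimate:", round(fused_position, 3))

# 2) Thrust allocation: modules with more remaining fuel contribute a larger share
# of the commanded thrust, which keeps the fuel distribution balanced over time.
required_thrust = 2.0                            # N, from the trajectory controller
fuel_remaining = np.array([0.8, 0.5, 0.2])       # propellant left per module (kg)
shares = fuel_remaining / fuel_remaining.sum()
thrust_per_module = required_thrust * shares
print("thrust per module (N):", thrust_per_module.round(3))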

The cooperative controller was compared to an independent controller where each module attempts to follow its own trajectory while minimizing its own fuel usage and trajectory errors. Results from simulation and reality show that the cooperative controller can save significant amounts of fuel, up to 43% in one experiment, while making the trajectories more precise.

Experiments in reality were performed with two satellites using the MIT Field and Space Robotics Laboratory Free-Flying Space Robot Test Bed shown below.
