News & Events


TRS - Prof. Animesh Garg (University of Toronto, Vector Institute, Nvidia) - Building Blocks of Generalizable Autonomy

When: 16.6.2021 at 15:30

Where: Zoom

Abstract: My approach to Generalizable Autonomy posits that interactive learning across families of tasks is essential for discovering efficient representation and inference mechanisms. Arguably, a cognitive concept or a dexterous skill should be reusable across task instances to avoid constant relearning. It is insufficient to learn to “open a door” and then have to re-learn it for a new door, or even for windows and cupboards. Thus, I focus on three key questions: (1) representational biases for embodied reasoning, (2) causal inference in abstract sequential domains, and (3) interactive policy learning under uncertainty. In this talk, I will demonstrate the need for structured biases in modern RL algorithms in the context of robotics, spanning states, actions, learning mechanisms, and network architectures. Second, I will discuss the discovery of latent causal structure in dynamics for planning. Finally, I will demonstrate how large-scale data generation, combined with insights from structure learning, can enable sample-efficient algorithms for practical systems. I will focus mainly on manipulation, but my work has also been applied to surgical robotics and legged locomotion.


TRS - Prof. Guy Hoffman (Cornell University) - Designing Robots and Designing with Robots

When: 2.6.2021 at 15:30

Where: Zoom

Abstract: Designing robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior, physical form, mechanical structure, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI, Design, and Human-Computer Interaction.  This talk focuses on three recent projects, two concerning the design of a new robot, and one that tackles designing robots that help human designers.

You can watch the seminar here.


TASP PhD Seminar - Khen Elimelech - "Efficient Decision Making Under Uncertainty in High-Dimensional State Spaces"

Work toward a PhD degree under the supervision of Prof. Vadim Indelman

When: 24.5.2021 at 15:30

Where: Zoom

Abstract: The fundamental goal of artificial intelligence (AI) research is to allow agents and robots to autonomously plan and execute their actions. To achieve reliable and robust performance, these agents must account for real-world uncertainty, which can stem from multiple sources: dynamic environments, in which unpredictable events might occur; noisy or limited sensor measurements, such as an imprecise GPS signal; and inaccurate execution of actions. Practically, these settings require reasoning over high-dimensional probabilistic states, known as “beliefs”, which represent the agent's knowledge of the world. To decide on the optimal and “safest” course of action, the agent should probabilistically predict the future development of its belief, considering a set of candidate actions or policies. However, such belief propagation over long horizons requires computationally demanding optimization of numerous inter-connected variables. Real-time decision making under uncertainty is therefore a challenge, especially when processing power is limited, as is often the case with mobile robots. Hence, our work focuses on developing methods that reduce the computational complexity of this decision-making problem while providing formal optimality guarantees. In this talk, we will present several of the novel techniques we have developed. First, we will prove and demonstrate that relying on a sparse approximation of the agent's belief (represented in the Gaussian case by a high-dimensional matrix) can significantly reduce the complexity of belief propagation while still maintaining optimality (“action consistency”); the sparsification is utilized only in the planning stage, and thus compromises neither the quality nor the efficiency of the state estimation. We will then show that when the action domain is large, bounded approximations allow us to eliminate unfit actions without exactly evaluating all the candidates. Finally, we will introduce PIVOT: Predictive Incremental Variable Ordering Tactic. Uniquely, this approach optimizes the representation of the present belief (matrix) based on the predicted future development of the state, rather than on the current knowledge alone; this technique not only reduces the complexity of decision making, but also reduces the cost of “loop closing” when re-observing scenes during action execution. We will demonstrate the benefits of these methods on autonomous navigation and active Simultaneous Localization and Mapping (SLAM) problems, where we significantly reduce computation time without compromising the quality of the solution.
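To give a flavor of the belief-sparsification idea described above (this is a toy sketch with made-up numbers, not the speaker's actual algorithm), the snippet below represents a Gaussian belief by its information matrix, crudely sparsifies it by zeroing weak off-diagonal couplings, and shows that an entropy-based action ranking can be preserved under the cheaper sparsified belief:

```python
import numpy as np

def entropy_cost(info):
    """Differential entropy of a Gaussian belief given its information
    (inverse-covariance) matrix; lower entropy means less uncertainty."""
    n = info.shape[0]
    sign, logdet = np.linalg.slogdet(info)
    assert sign > 0, "information matrix must be positive definite"
    return 0.5 * (n * np.log(2 * np.pi * np.e) - logdet)

def sparsify(info, threshold):
    """Zero weak off-diagonal couplings -- a crude stand-in for the
    structured, consistency-preserving sparsification in the actual work."""
    sparse = info.copy()
    off_diag = ~np.eye(sparse.shape[0], dtype=bool)
    sparse[off_diag & (np.abs(sparse) < threshold)] = 0.0
    return sparse

# Hypothetical prior belief over a 3-dimensional state, plus the information
# gained by two candidate actions (e.g. taking different measurements).
prior = np.array([[4.0, 0.05, 0.0],
                  [0.05, 3.0, 0.02],
                  [0.0, 0.02, 5.0]])
gains = {"a": np.diag([1.0, 0.0, 0.0]),
         "b": np.diag([0.0, 0.0, 2.0])}

sparse_prior = sparsify(prior, threshold=0.1)

# Action ranking under the full belief, and under the sparsified one:
full_pick = min(gains, key=lambda k: entropy_cost(prior + gains[k]))
sparse_pick = min(gains, key=lambda k: entropy_cost(sparse_prior + gains[k]))
```

In this contrived example both beliefs select the same action ("action consistency"); the real contribution is proving when and how such consistency is guaranteed, which the toy thresholding above does not do.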


TRS - Prof. Peter Stone (University of Texas at Austin) - Machine Learning for Robot Locomotion: Grounded Simulation Learning and Adaptive Planner Parameter Learning

When: 12.5.2021 at 15:30

Where: Zoom

Abstract: Robust locomotion is one of the most fundamental requirements for autonomous mobile robots. With the widespread deployment of robots in factories, warehouses, and homes, it is tempting to think that locomotion is a solved problem. However, for certain robot morphologies (e.g. humanoids) and environmental conditions (e.g. narrow passages), significant challenges remain. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot (sim-to-real). It then introduces Adaptive Planner Parameter Learning as a way of leveraging human input (learning from demonstration) to make existing robot motion planners more robust, without losing their safety properties. Grounded Simulation Learning has led to the fastest known stable walk on a widely used humanoid robot, and Adaptive Planner Parameter Learning has led to efficient learning of robust navigation policies in highly constrained spaces.

You can watch the seminar here.


TRS - Dr. David Rosen (Massachusetts Institute of Technology) - Certifiably Correct Machine Perception

When: 5.5.2021 at 15:30

Where: Zoom

Abstract: Machine perception is the process of constructing a model of an embodied agent’s environment from raw sensory data. This capability is essential for mobile robots, supporting such core functions as planning, navigation, and control. However, many fundamental machine perception tasks (e.g. navigation) require the solution of a high-dimensional nonconvex estimation problem, which is computationally intractable in general. This computational complexity presents a serious obstacle to the development of practical and reliable machine perception methods suitable for real-time robotics applications. To address this challenge, in this talk we present a novel class of certifiably correct algorithms that are capable of efficiently solving generally intractable robotic perception problems in many practical settings. In brief, these methods are based upon a (convex) semidefinite relaxation whose minimizer we prove provides an exact (globally optimal) solution to the original estimation problem under moderate measurement noise; moreover, whenever exactness obtains, it is possible to verify this fact a posteriori, thereby certifying the correctness (global optimality) of the recovered estimate. We illustrate the design of this class of methods using the fundamental problem of robotic mapping as a running example, culminating in the presentation of SE-Sync, the first practical method provably capable of recovering correct (globally optimal) map estimates. Finally, we conclude with a discussion of open questions and future research directions.
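The a-posteriori certification idea can be illustrated on a much simpler nonconvex problem than pose-graph optimization (this toy example is our own, not SE-Sync itself): minimizing a quadratic form over the unit sphere. A candidate solution is a verified global optimum exactly when a certificate matrix, built from the candidate's Lagrange multiplier, is positive semidefinite:

```python
import numpy as np

def certify(A, x, tol=1e-8):
    """A-posteriori global-optimality certificate for the nonconvex problem
    min_x x^T A x subject to ||x|| = 1: the unit-norm candidate x is a
    global minimizer iff A - (x^T A x) I is positive semidefinite."""
    lam = float(x @ A @ x)                   # Lagrange multiplier estimate
    cert = A - lam * np.eye(A.shape[0])
    return bool(np.min(np.linalg.eigvalsh(cert)) >= -tol)

# A symmetric toy instance; its global minimizer is the eigenvector of the
# smallest eigenvalue, while the other eigenvectors are critical points only.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 4.0]])
eigenvalues, eigenvectors = np.linalg.eigh(A)
x_good = eigenvectors[:, 0]    # certificate accepts: globally optimal
x_bad = eigenvectors[:, -1]    # certificate rejects: suboptimal critical point
```

The certified methods in the talk apply the same duality-based logic to far larger estimation problems, where the certificate is supplied by a semidefinite relaxation rather than an eigendecomposition.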

You can watch the seminar here.



TASP PhD Seminar - Vitaly Shalumov - "Cooperative Guidance for Active Aircraft Protection"

Work toward a PhD degree under the supervision of Prof. Tal Shima

When: 5.5.2021 at 10:30

Where: Zoom

Abstract: The presented research deals with an interception scenario consisting of multiple maneuvering aircraft and missiles defending those aircraft from multiple incoming threats. The main objective of the research is to address the inherent coupling between the aircraft’s and defenders’ guidance and the defenders’ allocation in this multi-agent scenario, by developing novel cooperative guidance and weapon-target allocation (WTA) strategies. The talk will begin with a problem in which the interception of the attacker by the defender is uncertain, leading to possible defender-to-attacker re-allocations mid-flight. The research yielded guidance laws designed to deliver optimal average performance over all possible allocation decision sequences, thus compensating for the allocation uncertainty. To ensure the feasibility of each re-allocation, we extended these guidance laws to include a re-allocation feasibility constraint; concretely, we enforced that all the attackers are within the defender’s seeker field of view at re-allocation time. The optimal controller for the coupled uncertain-constrained problem will be presented along with its performance analysis. Finally, we will address the problem in which the WTA scheme is not fixed and is thus subject to optimization. The investigation yielded a family of linear cooperative guidance laws and a general framework for the development of static (i.e., before launch) WTA algorithms. The presentation of the developed WTA algorithms and their performance in comparison to an optimal WTA will conclude the talk.
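To make the static WTA notion concrete (again, a toy sketch with hypothetical costs, not the guidance laws or allocation algorithms from the talk), a before-launch allocation for equal team sizes can be posed as a minimum-cost assignment and, for small teams, solved by brute force over permutations:

```python
from itertools import permutations

def static_wta(cost):
    """Brute-force static weapon-target allocation for equal team sizes.
    cost[i][j] is the (hypothetical) cost of assigning defender i to
    attacker j; returns the minimum total cost and the optimal assignment,
    where assignment[i] is the attacker covered by defender i."""
    n = len(cost)
    best_total, best_assignment = float("inf"), None
    for assignment in permutations(range(n)):
        total = sum(cost[i][assignment[i]] for i in range(n))
        if total < best_total:
            best_total, best_assignment = total, assignment
    return best_total, best_assignment

# Illustrative 3-defender vs. 3-attacker cost matrix (made-up numbers).
cost = [[4.0, 2.0, 8.0],
        [1.0, 5.0, 6.0],
        [7.0, 3.0, 2.0]]
total, assignment = static_wta(cost)
```

The research's contribution lies in coupling such allocation decisions with the cooperative guidance laws and handling mid-flight re-allocation under uncertainty, which this static sketch deliberately omits.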



TRS - Prof. David Held (Robotics Institute, Carnegie Mellon University) - Perceptual Robot Learning

When: 21.4.2021 at 15:30

Where: Zoom

Abstract: Robots today are typically confined to interacting with rigid, opaque objects with known object models. However, the objects in our daily lives are often non-rigid, can be transparent or reflective, and are diverse in shape and appearance. One reason for the limitations of current methods is that computer vision and robot planning are often considered separate fields. I argue that, to enhance the capabilities of robots, we should jointly design perception and planning algorithms based on the robotics task to be performed. I will show how we can develop novel perception algorithms to assist with the tasks of manipulating cloth, manipulating novel objects, and grasping transparent and reflective objects. By thinking about the downstream task and jointly developing vision and planning algorithms, we can significantly improve our progress on difficult robotic tasks.

You can watch the seminar here.



TASP MSc Seminar - Michael Pukshansky - "Experimental Study of Actuation in Minimally Controlled Frusta-Based Fluid-Driven Soft Robotics"

Work toward an MSc degree under the supervision of Assoc. Prof. Yizhar Or and Assoc. Prof. Amir Gat

When: 19.5.2021 at 14:00

Where: Zoom

Abstract: Soft robotics is an emerging field of research, greatly inspired by nature, which focuses on the analysis and design of robots with flexible structures that can deform and change shape and dimensions continuously. Soft robots are expected to be especially useful in man-machine interfaces, locomotion on different terrains and through narrow spaces, robotic minimally-invasive surgery, and more. This work helps to simplify the hardware needed for controlling such robots. We focus on fluid-driven elastic actuators, in which actuation is achieved by controlling the pressure or flow rate at the fluid inlets of the structure. For complicated actuation (e.g. three-dimensional movement or locomotion), several separate control inlets are usually required. In this work, we minimize the number of control inlets by utilizing dynamic effects of viscous flow and by using multi-stable elastic structures. Experiments were conducted to study the behavior of the frusta, a multi-stable structure also known as a “bending straw”, which has many different equilibrium states. By controlling the pressure of fluid entrapped in the frusta, we show that it is possible to switch between the different states in a desired order. Connecting the flow between several different frusta and using a high-viscosity liquid allowed us to present a structure in which three-dimensional actuation is achieved while controlling only one inlet pressure. We also present a mathematical model and numerical simulations; several experimental systems were built to verify the model and to demonstrate controlled actuation between several states.

You can watch the seminar here.