News & Events

12.01.2022

TRS - Prof. Sven Koenig (University of Southern California) - Recent Advances in Multi-Agent Path Finding

When: 19.1.2022 at 15:00

Where: Zoom

Abstract: The coordination of robots and other agents is becoming increasingly important for industry. For example, on the order of one thousand robots already navigate autonomously in Amazon fulfillment centers to move inventory pods all the way from their storage locations to the picking stations that need the products they store (and vice versa). Optimal and even some approximately optimal path planning for these robots is NP-hard, yet one must find high-quality collision-free paths for them in real time. Algorithms for such multi-agent path-finding problems have been studied in robotics and theoretical computer science for a long time, but existing ones are either fast yet produce solutions of insufficient quality, or produce good solutions yet are too slow. In this talk, I will discuss different variants of multi-agent path-finding problems, cool ideas for both solving them and executing the resulting plans robustly, and several of their applications, including warehousing, manufacturing, and autonomous driving. I will also discuss how three Ph.D. students from my research group and one Ph.D. student from a collaborating research group at Monash University used multi-agent path-finding technology to win the NeurIPS-20 Flatland train-scheduling competition. Our research on this topic has been funded by both NSF and Amazon Robotics.
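
For readers unfamiliar with the problem, the toy sketch below (in Python, not taken from the talk) illustrates what "collision-free" means in multi-agent path finding: two agents' plans conflict if they occupy the same cell at the same time step or swap cells between consecutive steps. All names are illustrative.

```python
# Illustrative sketch, not from the talk: detecting conflicts between two
# agents' planned paths on a grid. A path is a list of cells, one per time step;
# agents that reach their goal are assumed to wait there.

def first_conflict(path_a, path_b):
    """Return (time, kind) of the first conflict, or None if conflict-free."""
    at = lambda p, t: p[min(t, len(p) - 1)]  # position at time t, waiting at the goal
    for t in range(max(len(path_a), len(path_b))):
        if at(path_a, t) == at(path_b, t):
            return (t, "vertex")  # same cell at the same time
        if t > 0 and at(path_a, t) == at(path_b, t - 1) and at(path_b, t) == at(path_a, t - 1):
            return (t, "edge")    # the two agents swap cells
    return None

# Example: both agents try to pass through cell (1, 1) at time step 1.
print(first_conflict([(0, 0), (1, 1), (2, 2)], [(2, 0), (1, 1), (0, 2)]))  # -> (1, 'vertex')
```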

9.01.2022

TASP MSc Seminar - Shir Kozlovsky - Learning Admittance Control for Contact-Rich Assembly Skills

Work towards MSc degree under the supervision of Prof. Miriam Zacksenhouse

When: 25.1.2022 at 14:00

Where: Zoom

Abstract: Over the years, industrial robots have been employed increasingly to perform advanced production and high-precision tasks in various industries. However, further integration of industrial robots is hampered by their lack of flexibility, adaptability, and decision-making capabilities compared to human operators. Contact-rich assembly tasks are particularly challenging, since even small uncertainties in the location of the parts can cause large reaction forces and prevent the robot from performing the task successfully. Large industries overcome this problem by designing precise assembly lines. However, this approach is very costly and not economical for small and medium industries, where production volumes are not very large. An alternative approach is to use admittance control to handle location uncertainties by endowing the robot’s end-effector with proper stiffness, damping, and inertia properties so that it corrects its position in response to the reaction forces. The power of admittance control for performing assembly tasks has motivated a number of researchers to develop machine learning tools to tune the admittance parameters explicitly. However, learning was limited to diagonal admittance matrices and thus required learning the trajectory too. My thesis is based on the understanding that non-diagonal elements in the admittance matrices are critical for correcting the robot’s motion during assembly tasks. In particular, those elements can cause the robot to perform proper translation movements in response to the reaction torques caused by initial misalignments. In my thesis, I used reinforcement learning, and in particular proximal policy optimization (PPO), to learn the parameters of non-diagonal admittance matrices for peg-in-hole assembly tasks. Learning was performed in simulation and tested both in simulation and on a real robot (UR5e). Results demonstrate that the learned admittance control is robust to uncertainties and can generalize to different locations and sizes. Interestingly, I show that the learned admittance matrices are space invariant. Most importantly, the learned policy was demonstrated successfully on a physical robot, for both rigid pegs and semi-rigid wires, without any retraining. This research is funded by the Israel Innovation Authority (IIA) as part of the ART (Assembly by Robotic Technology) academia-industry cooperation (“MAGNET”), which aims to develop generic tools for increasing robotic integration in industry, especially for small-to-medium production volumes.
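
As background for the abstract above, the sketch below shows a standard discrete-time admittance law and how a single off-diagonal stiffness entry lets a measured reaction torque produce a translational correction, the kind of coupling the thesis argues is essential. This is not the speaker's implementation; all numeric values are placeholders rather than learned parameters.

```python
import numpy as np

# Minimal admittance-control sketch:  M*a + D*v + K*dx = f_ext  converts a
# measured force/torque wrench into a pose correction for the end-effector.
# The off-diagonal stiffness entry couples the torque-about-y channel to
# translation along x (steady state: dx = inv(K) @ f_ext), so a misalignment
# torque drives a sideways correction. Values are illustrative only.

dt = 0.002                        # control period [s]
M = np.eye(6) * 2.0               # virtual inertia
D = np.eye(6) * 50.0              # virtual damping
K = np.eye(6) * 400.0             # virtual stiffness
K[0, 4] = -150.0                  # off-diagonal: torque about y drives motion along x

def admittance_step(x_err, v, f_ext):
    """One Euler integration step; returns the updated pose correction and velocity."""
    a = np.linalg.solve(M, f_ext - D @ v - K @ x_err)
    v = v + a * dt
    x_err = x_err + v * dt
    return x_err, v

# Example: a pure reaction torque about y gradually produces a translation along x.
x_err, v = np.zeros(6), np.zeros(6)
wrench = np.array([0.0, 0.0, 0.0, 0.0, 1.5, 0.0])   # [fx, fy, fz, tx, ty, tz]
for _ in range(1000):
    x_err, v = admittance_step(x_err, v, wrench)
```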

4.01.2022

TASP MSc Seminar - Doron Pinsky - T*ε: Bounded-Suboptimal Efficient Motion Planning for Minimum-Time Planar Curvature-Constrained Systems

Work towards MSc degree under the supervision of Dr. Oren Salzman

When: 20.1.2022 at 11:00

Where: Zoom

Abstract: We consider the problem of finding collision-free paths for curvature-constrained systems in the presence of obstacles while minimizing execution time. Specifically, we focus on the setting where a planar system can travel at some range of speeds with unbounded acceleration. This setting can model many systems, such as fixed-wing drones. Unfortunately, planning for such systems might require evaluating many (local) time-optimal transitions connecting two close-by configurations, which is computationally expensive. Existing methods either pre-compute all such transitions in a preprocessing stage or use heuristics to speed up the search, thus foregoing any guarantees on solution quality. Our key insight is that computing all the time-optimal transitions is both (i) computationally expensive and (ii) unnecessary for many problem instances. We show that by finding bounded-suboptimal solutions (solutions whose cost is bounded by 1+ε times the cost of the optimal solution for any user-provided ε) and not time-optimal solutions, one can dramatically reduce the number of time-optimal transitions used. We demonstrate using empirical evaluation that our planning framework can reduce the runtime by several orders of magnitude compared to the state-of-the-art while still providing guarantees on the quality of the solution.
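
As a small illustration of the bounded-suboptimality idea mentioned in the abstract (a sketch under assumed names, not the T*ε algorithm itself): a planner that maintains a lower bound on the optimal cost, for example the minimum f-value in its open list, can return an incumbent solution as soon as its cost is within a factor of (1 + ε) of that bound, which lets it skip computing many exact time-optimal transitions.

```python
# Illustrative sketch of the (1 + eps) bounded-suboptimality criterion.

def within_bound(incumbent_cost, lower_bound, eps):
    """True if the incumbent solution is provably at most (1 + eps) times optimal."""
    return incumbent_cost <= (1.0 + eps) * lower_bound

# Example: a 10.4 s trajectory with a 10.0 s lower bound is acceptable for eps = 0.05.
print(within_bound(10.4, 10.0, 0.05))   # -> True
```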

2.01.2022

TRS - Dr. Sarah Keren (CS, Technion) - Better Environments for Better AI

When: 5.1.2022 at 15:00

Where: Zoom

Abstract: Most AI research focuses exclusively on the AI agent itself, i.e., given some input, what are the improvements to the agent’s reasoning that will yield the best possible output? In my research, I take a novel approach to increasing the capabilities of AI agents by using AI to design the environments in which they are intended to act. My methods identify the inherent capabilities and limitations of AI agents and find the best way to modify their environment in order to maximize performance. I will describe research projects that vary in their design objective, in the AI methodologies applied to find optimal designs, and in the real-world applications to which they correspond. One example is Goal Recognition Design (GRD), which seeks to modify environments so that an observing agent can infer the goals of acting agents as early as possible. A second is Helpful Information Shaping (HIS), which seeks the minimal information to reveal to a partially informed robot in order to guarantee that the robot’s goal can be achieved. I will also show how HIS can be used in a market of information, where robots trade their knowledge about the environment and achieve effective communication that allows them to jointly maximize their performance. The third, Design for Collaboration (DFC), considers an environment with multiple self-interested reinforcement learning agents and seeks ways to encourage them to collaborate effectively. Throughout the talk, I will discuss how the different frameworks fit within my overarching objective of using AI to promote effective multi-agent collaboration and to enhance the way robots and machines interact with humans.

You can watch the seminar here

2.01.2022

TRS - Prof. Shai Revzen (University of Michigan) - Learning locomotion the easy way

When: 12.1.2022 at 15:00

Where: Zoom

Also: Dan Kahn Building, room 217, Technion

Abstract: It seems that animals are very good at learning how to move, and at recovering the ability to move after injury. Roboticists have attempted to imbue robots with the same capabilities, with only moderate success. Pursuing a deeper understanding of the underlying mathematics and physics of locomotion, I will present the idea of using limit-cycle oscillators as the key mathematical object to consider. Using tools developed for modeling the oscillators that appear in biological locomotion, and combining them with insights from geometric mechanics, we created robots that can learn how to move with an optimization that lasts only a few dozen cycles. The talk will present these ideas at a high level, focusing primarily on experimental results from animals, robots, and simulated robots.

You can watch the seminar here

2.01.2022

TASP MSc Seminar - Ohad Shelly - Hypotheses Disambiguation in Retrospective

Work towards MSc degree under the supervision of Prof. Vadim Indelman

When: 9.2.2022 at 11:00

Where: Zoom

Abstract: Robust perception is a key required capability in robotics and AI when dealing with scenarios and environments that exhibit some level of ambiguity and perceptual aliasing. In this work, we consider such a setting and contribute a framework that updates the probabilities of externally defined data-association hypotheses from some past time using new information accumulated up to the current time. In particular, we show that appropriately updating the probabilities of past hypotheses within this smoothing perspective can potentially disambiguate them even when the mixture distribution at the current time cannot be fully disambiguated. Further, we develop an incremental algorithm that re-uses hypothesis-weight calculations from previous steps, thereby reducing computational complexity. In addition, we show how our approach can be used to enhance current-time hypothesis pruning by discarding the corresponding branches in the hypothesis tree. We demonstrate our approach in simulation, considering an extremely aliased environment setting.
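
To make the idea of retrospective disambiguation concrete, here is a minimal, generic sketch (not the authors' algorithm): the weights of past data-association hypotheses are re-scaled by the likelihood of measurements gathered after that time and then renormalized, so a hypothesis that looked ambiguous then may become dominant in hindsight. The numbers are purely illustrative.

```python
import numpy as np

def update_hypothesis_weights(prior_weights, later_likelihoods):
    """Bayes-style reweighting: posterior weight is proportional to
    (weight at the past time) * (likelihood of the data observed since)."""
    w = np.asarray(prior_weights) * np.asarray(later_likelihoods)
    return w / w.sum()

# Example: two data associations were equally likely at time k; later
# observations fit the first hypothesis far better, so it can now be
# disambiguated and the other branch pruned from the hypothesis tree.
print(update_hypothesis_weights([0.5, 0.5], [0.9, 0.02]))   # -> approx. [0.978, 0.022]
```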

12.12.2021

TRS - Prof. Luca Carlone (MIT) - From SLAM to Real-time Scene Understanding: 3D Dynamic Scene Graphs and Certifiable Perception Algorithms

When: 22.12.2021 at 15:00

Where: Zoom

Abstract: Spatial perception, the robot’s ability to sense and understand the surrounding environment, is a key enabler for autonomous systems operating in complex environments, including self-driving cars and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of an unknown environment, which are crucial capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well aware of the brittleness of existing perception systems, and a large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap. The first effort targets high-level understanding. While humans quickly grasp the geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, which extend the traditional notions of mapping and SLAM and allow a robot to build a “mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction. The second effort focuses on robustness. I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation and showcase an application to vehicle pose and shape estimation in self-driving scenarios.

You can watch the seminar here

1.12.2021

TRS - Dr. Tal Nir (Principal Computer Vision Engineer at Asensus Surgical) - Robotic Minimally Invasive Surgery: Current and Future Trends

When: 8.12.2021 at 15:00

Where: Zoom

Abstract: Minimally invasive surgery has become the best practice for many surgical procedures due to its fast recovery and minimal damage to the patient. Robotic surgery is also becoming more popular, offering the surgeon better ergonomics, a stable line of sight, and fine movements. In this lecture, we will review recent technologies employed in robotic surgery for reducing the surgeon’s intensive labor and workload, including image-processing capabilities that allow autonomous camera movements during surgery and 3D reconstruction for fast physical measurements. We will see how machine learning and scene understanding can further improve the surgical procedure.