When: 3.11.2021 at 15:30
Abstract: Factor graphs have become a popular tool for modeling robot perception problems. Not only can they model the bipartite relationship between sensor measurements and variables of interest for inference, but they have also been instrumental in devising novel inference algorithms that exploit the spatial and temporal structure inherent in these problems. I will start with a brief history of these inference algorithms and relevant applications. I will then discuss open challenges, in particular those related to robustness from the inference perspective, and present some recent steps towards more robust perception algorithms.
Work towards PhD degree under the supervision of Prof. Erez Karps and Prof. Emeritus Per-Olof Gutman
When: 4.11.2021 at 11:00
Abstract: Robots operate in the real world, which is hybrid, i.e. comprised of continuous and discrete properties, uncertain, constrained, and non-linear, and which often requires cooperation, or at least synchronization, with other agents, both human and robotic. Moreover, for an agent to reach its desired goal, a long time horizon is usually required, which accumulates errors and makes discretizing the problem infeasible due to the dimensionality involved. Each of these challenges separately poses a major obstacle to autonomous behavior. To reach autonomy, robots must be able to come up with long-term plans in the face of these challenges. Various communities have addressed these problems, e.g., the control, automated planning, machine learning, and robotics communities, each with its own merits and weaknesses. In this work, we attempt to bridge the gap between these communities and present a unified method that leverages an accurate short-term control strategy, long-term abstract planning methods, and deep neural networks tailored on the fly. Our method can handle long, continuous horizons, allows for concurrency and synchronization, and incorporates accurate non-linear dynamic models, while balancing between expensive accurate computations and “simple” abstract computations.
Work towards MSc degree under the supervision of Dr. Shlomi Laufer
When: 29.9.2021 at 14:00
Abstract: Medical simulators provide a controlled environment for training and assessing clinical skills. However, as an assessment platform, a simulator requires the presence of an experienced examiner to provide performance feedback, commonly performed using a task-specific checklist. This makes the assessment process inefficient and expensive. Furthermore, this evaluation method does not provide medical practitioners with the opportunity for independent training. Ideally, the process of managing the simulation should be handled by a fully aware, objective system capable of recognizing and monitoring the clinical performance and acting accordingly. In our study we applied techniques from graph networks and language models to construct a fully autonomous simulation framework, based on clinical data collected from 28 medical simulations. A key finding of our work is that by analyzing physicians’ speech alone, we can successfully perform state estimation and make predictions regarding their medical treatment planning. We propose that the fully autonomous speech-based framework for managing medical simulations constructed in this study is applicable to clinical practice. In a field where seconds can make the difference between life and death, integrating an autonomous assisting tool is of great importance.
Work towards MSc degree under the supervision of Assoc. Prof. Jack Hadad
When: 2.9.2021 at 15:00
Abstract: Autonomous vehicles that travel without considering lane marks and utilize the full road width have an opportunity to maximize the vehicles’ performance. By taking advantage of the entire width of curvy roads and the cooperative behavior of connected autonomous vehicles, new options for path planning can be implemented while utilizing the existing infrastructure. This research focuses on path and trajectory planning for fully autonomous vehicles that disregard lane marks, using a proposed cooperative controller. The controller uses the nonlinear model predictive control (NMPC) approach for dozens of autonomous vehicles on the existing road infrastructure. It maximizes the vehicles’ progress along the road with minimal control effort while complying with the design constraints imposed by the road geometry, the distances between vehicles, and the vehicle dynamics. The controller outputs the longitudinal acceleration and steering-rate inputs. It was tested in several case-study simulations on closed-loop tracks and straight roads, with different numbers of vehicles and with both identical and differing vehicle parameters, to examine the advantages of the lane-free road concept. As part of the simulations, a comparison between the lane-free concept and the “traditional” lane concept showed that the lane-free concept performed better in the examined case studies. In addition, lab experiments were conducted with three robots performing several case studies.
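As an illustration of the receding-horizon idea behind NMPC (this is not the thesis controller), the following sketch optimizes a single vehicle's progress against control effort over a short horizon. The unicycle model, horizon length, weights, and input bounds are all illustrative assumptions, and the multi-vehicle coupling and road-geometry constraints described above are omitted.

```python
import numpy as np
from scipy.optimize import minimize

DT, N = 0.2, 5  # step size [s] and horizon length (illustrative values)

def rollout(state, controls):
    """Integrate unicycle dynamics: x' = v cos h, y' = v sin h, h' = w, v' = a."""
    x, y, h, v = state
    for a, w in controls:
        x += DT * v * np.cos(h)
        y += DT * v * np.sin(h)
        h += DT * w
        v += DT * a
    return np.array([x, y, h, v])

def cost(u_flat, state):
    """Trade off road progress (x at the end of the horizon) against control effort."""
    u = u_flat.reshape(N, 2)
    final = rollout(state, u)
    return -final[0] + 0.1 * np.sum(u ** 2)

def nmpc_step(state):
    """Solve one receding-horizon problem; return the first control input."""
    bounds = [(-2.0, 2.0), (-1.0, 1.0)] * N  # acceleration and heading-rate limits
    res = minimize(cost, np.zeros(2 * N), args=(state,),
                   method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(N, 2)[0], res

state = np.array([0.0, 0.0, 0.0, 5.0])  # x, y, heading, speed
u0, res = nmpc_step(state)
```

In a closed loop, only `u0` would be applied and the optimization repeated at the next step, which is what gives NMPC its robustness to model error.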
Work towards PhD degree under the supervision of Prof. Alfred M. Bruckstein
When: 12.9.2021 at 11:00
Abstract: The thesis investigates the emergent behavior of multi-agent systems with single-integrator dynamics, guided by an exogenous broadcast control. The broadcast guidance control, a velocity signal, is detected and applied by a subgroup of agents, referred to as leaders. Several linear and non-linear models are considered, with fixed as well as time-varying topology, the latter caused by limited visibility. In each case, we show the impact of the broadcast control and of the subset of leaders on the asymptotic behavior of the system.
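A minimal numerical sketch of the simplest setting above (single integrators, a fixed complete-graph topology, and a constant broadcast velocity; all parameter values are illustrative): the agents gather, and the group asymptotically moves at the broadcast velocity scaled by the fraction of leaders.

```python
import numpy as np

def simulate(n=10, n_leaders=3, u=1.0, steps=5000, dt=0.01, seed=0):
    """Single-integrator consensus with broadcast control:
    x' = -L x + b, where b_i = u for the leaders and 0 otherwise."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, n)            # scalar initial positions
    L = n * np.eye(n) - np.ones((n, n))      # Laplacian of the complete graph
    b = np.zeros(n)
    b[:n_leaders] = u                        # only the leaders apply the broadcast
    for _ in range(steps):
        x = x + dt * (-L @ x + b)            # forward-Euler integration
    v = -L @ x + b                           # per-agent velocities at the end
    return x, v

x, v = simulate()
# After the transients decay, every agent moves at u * n_leaders / n = 0.3.
```

Because the Laplacian is symmetric here, the group's mean velocity equals the mean of the broadcast input, which is why the leader fraction directly sets the asymptotic speed.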
Work towards PhD degree under the supervision of Prof. Per-Olof Gutman
When: 19.7.2021 at 15:30
Where: Dan Kahn building, room # 217
Abstract: The main objective of this research is to develop a novel method for helping human subjects acquire completely new sets of motor skills required for activities in unfamiliar or unnatural environments. The proposed study case is the free-fall stage of skydiving. At this stage, aerial maneuvers are performed by changing the body posture and thus deflecting the surrounding airflow. The natural learning process is extremely slow due to the unfamiliar free-fall dynamics, stress-induced blocking of kinesthetic feedback, and the complexity of the required movements. The key idea is to augment the learner with an automatic control system that would be able to perform the trained activity if it had direct access to the learner’s body as an actuator. The aiding system will supply the following visual cues to the learner: 1. Feedback on the current body posture; 2. The body posture that would bring the body to perform the desired maneuver; 3. A prediction of the future inertial position and orientation if the body retains its present posture. The system will enable novices to maintain stability in free-fall and to perceive the unfamiliar environmental dynamics, thus accelerating the initial stages of skill acquisition. A proof-of-concept experiment was conducted, in which humans controlled, by means of their bodies, a virtual skydiver free-falling in a computer simulation. This task was impossible without the aiding system, which enabled all participants to complete the task on their first attempt. Computing the visual cues required modeling human body free-fall dynamics. A skydiving simulator comprising biomechanical, aerodynamic, and kinematic models, dynamic equations of motion, and a virtual-reality interface was developed and experimentally verified. Aerodynamic coefficients and skydiver-related inputs were estimated via a modified Unscented Kalman Filter from experiments involving the execution of a large variety of free-fall maneuvers.
A novel control method based on the Unscented Transform was developed for performing highly advanced maneuvers in a virtual setting. The crux of the research was converting autonomous maneuver execution into motor-learning aids. This process involved a thorough analysis of the movement repertoire of skydivers with varying levels of skill. It was discovered that experienced athletes use 2-5 movement patterns for most maneuvers. Each movement pattern is a combination of body degrees of freedom that are activated proportionally and synchronously, as a single unit. The most significant discovery was that the dynamic characteristics of the plant, comprising the environment and the actuated body, depend strongly on the choice of movement patterns. Based on a number of study cases, we propose a novel hypothesis regarding human motor equivalence: the multiple body degrees of freedom are not necessarily redundant, but are needed for shaping the plant dynamics so that desired maneuvers can be performed with a simple control law. Additionally, it was discovered that movement patterns may form synergies in order to achieve a desired trade-off between different dynamic characteristics, such as stability and agility. Based on these novel insights, an unconventional sports-technique analysis method was developed and demonstrated in a study case.
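For readers unfamiliar with the Unscented Transform mentioned above, here is a minimal generic sketch (standard sigma-point weights; the parameter values are illustrative, and this is not the modified filter or controller developed in the thesis): sigma points of a Gaussian are propagated through a nonlinear function, and the transformed mean and covariance are recovered as weighted statistics.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.5, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f via sigma points."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)            # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))     # mean weights
    Wc = Wm.copy()                                     # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + 1.0 - alpha ** 2 + beta
    Y = np.array([f(s) for s in sigma])                # transformed sigma points
    y_mean = Wm @ Y
    d = Y - y_mean
    y_cov = (Wc[:, None] * d).T @ d
    return y_mean, y_cov

# Demo: for a linear map the transform is exact.
A = np.array([[1.0, 0.5], [-0.2, 1.0]])
m = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
y_mean, y_cov = unscented_transform(m, P, lambda s: A @ s)
```

The appeal of the transform is that `f` is treated as a black box: no Jacobians are required, which suits the complex aerodynamic models described in the abstract.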
When: 21.7.2021 at 15:30
Abstract: My approach to Generalizable Autonomy posits that interactive learning across families of tasks is essential for discovering efficient representation and inference mechanisms. Arguably, a cognitive concept or a dexterous skill should be reusable across task instances to avoid constant relearning. It is insufficient to learn to “open a door” and then have to re-learn the skill for a new door, or even for windows and cupboards. Thus, I focus on three key questions: (1) representational biases for embodied reasoning, (2) causal inference in abstract sequential domains, and (3) interactive policy learning under uncertainty. In this talk, I will demonstrate the need for structured biases in modern RL algorithms in the context of robotics. This will span states, actions, learning mechanisms, and network architectures. Secondly, we will talk about the discovery of latent causal structure in dynamics for planning. Finally, I will demonstrate how large-scale data generation combined with insights from structure learning can enable sample-efficient algorithms for practical systems. In this talk, I will focus mainly on manipulation, but my work has been applied to surgical robotics and legged locomotion as well.
When: 2.6.2021 at 15:30
Abstract: Designing robots for human interaction is a multifaceted challenge involving the robot’s intelligent behavior, physical form, mechanical structure, and interaction schema. Our lab develops and studies human-centered robots using a combination of methods from AI, Design, and Human-Computer Interaction. This talk focuses on three recent projects, two concerning the design of a new robot, and one that tackles designing robots that help human designers.
You can watch the seminar here.