Work towards PhD degree under the supervision of Assoc. Prof. Vadim Indelman
When: 21.12.2020 at 13:30
Abstract: Real-life scenarios in Autonomous Systems (AS) and Artificial Intelligence (AI) involve agents that are expected to operate online, reliably and efficiently, under different sources of uncertainty, often with limited knowledge of the environment. These settings necessitate probabilistic reasoning over high-dimensional, problem-specific states. Attaining such levels of autonomy involves two key processes: inference and decision making under uncertainty. The former maintains a belief over the high-dimensional state given the information available thus far, while the latter, often referred to as belief space planning (BSP), determines the next best action(s). However, as these problems are computationally expensive, simplifying assumptions or process streamlining are required to provide online or real-time performance. In recent years the similarities between inference and control have triggered much work, from developing unified computational frameworks to pondering the duality between the two. In spite of these efforts, inference and control, as well as inference and belief space planning, are still treated as two separate processes.
We present in this talk “Joint Inference and Belief Space Planning” (JIP), a novel paradigm that fully utilizes the similarities between probabilistic inference and BSP, thus enabling the re-use of computationally expensive calculations. Through the symbiotic relation enabled by JIP we developed new approaches for inference – Re-Use Belief Inference (RUBI) – and for decision making under uncertainty – Incremental eXpectation BSP (iX-BSP). In RUBI we update inference with a precursory planning stage, which can be considered a deviation from conventional Bayesian inference.
Rather than updating the belief from the previous time instant with new incoming information (e.g., measurements), RUBI exploits the fact that similar calculations have already been performed within planning in order to update the belief in inference far more efficiently while preserving accuracy. The iX-BSP approach exploits calculations performed in previous planning sessions to efficiently solve a new planning session while accounting for the data that has become available since then, which is particularly important when operating in uncertain, potentially dynamically changing, environments.
We demonstrate our novel paradigms on both simulation and real-world data in active visual SLAM experiments, benchmarking them against the current state of the art. We show that our paradigms save valuable computation time without introducing simplifying assumptions or affecting accuracy, thus making these advanced capabilities more feasible for an online setting.
Work towards MSc degree under the supervision of Assoc. Prof. Sagi Filin and Dr. Itzik Klein
When: 1.12.2020 at 14:30
Abstract: Simultaneous localization and mapping for a group of mobile platforms has seen growing interest in recent years. It is common, in such setups, to equip each platform with a high-end inertial measurement unit (IMU), an imaging system, and, as far as outdoor applications are concerned, a global navigation satellite system receiver. In most cases some form of relaxation is introduced, e.g., planar motion assumptions, ideal IMUs, and a static, near-range scene. Yet, many mobile platforms do not follow these assumptions, particularly the six degrees of freedom (DOF) dynamics of airborne platforms, but also vehicles operating on roads with bumps, pedestrians navigating with low-cost mobile phone sensors, or maritime platforms. In this research, we propose to improve the navigation solution, and the quality of the consequent mapping, by reflecting actual operational conditions, better estimating the sensor errors, and introducing stochastic constraints on the relative pose of the platforms. To make our solution viable and affordable, we study the use of low-cost sensors to facilitate such constraints and the manners by which they can be integrated. We propose a model that improves the accuracy of the navigation solution, enhances the numerical stability and robustness of the estimator, and improves the accuracy of the forward projection. We show that our model allows for greater flexibility in mission planning and facilitates shorter time on site without compromising quality. Our results show that a high level of accuracy is maintained even when the scale is increased, allowing greater coverage and shorter operation time, which in turn reduces costs. This is demonstrated by both simulations and real-world experiments on a group of platforms equipped with low-cost off-the-shelf sensors. As we show, our model is general and flexible, accommodating different types of sensors and platforms.
Work towards MSc degree under the supervision of Prof. Pini Gurfil
When: 9.11.2020 at 16:00
Abstract: Nanosatellites, most predominantly CubeSats, usually carry no propulsion system, yet their usage is important from both the scientific and the budgetary points of view. To increase nanosatellite functionality, as well as to allow orbit control in the presence of orbit injection errors, an ejection of mass from a reaction wheel is proposed. Utilizing the reaction wheel's kinetic energy, a propellant mass is released from the specialized wheel's outer circumference using a short application of an electric current. Such a propulsion mechanism can allow impulsive orbit control without traditional propulsion systems. A dynamical model of the satellite, the modified reaction wheel, and the ejected mass will be presented. A methodology will be given for constructing a model of a variable-mass reaction wheel designed according to given specifications of a desired velocity change and satellite attitude stability. An example of impulse generation for the small orbit corrections necessary to maintain a formation after orbit injection will be presented. The pros and cons of the proposed method will be discussed.
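The basic momentum-exchange arithmetic behind such a mechanism can be sketched as follows. This is a minimal back-of-the-envelope calculation; the wheel rate, rim radius, and masses below are hypothetical illustration values, not parameters from the talk:

```python
import math

def ejection_delta_v(wheel_rate_rpm, rim_radius_m, ejected_mass_kg, sat_mass_kg):
    """Impulsive delta-v from releasing a small mass off a spinning wheel's rim.

    The released mass leaves tangentially with the rim's linear speed;
    conservation of linear momentum then gives the satellite's velocity change.
    """
    omega = wheel_rate_rpm * 2.0 * math.pi / 60.0   # wheel spin rate [rad/s]
    v_tip = omega * rim_radius_m                     # tangential release speed [m/s]
    return ejected_mass_kg * v_tip / sat_mass_kg     # satellite delta-v [m/s]

# Hypothetical example: a 4 kg CubeSat ejecting 1 g from a 4 cm rim at 6000 rpm
dv = ejection_delta_v(6000.0, 0.04, 1e-3, 4.0)   # a few mm/s of delta-v
```

Even millimetre-per-second impulses of this kind can be useful for the small formation-keeping corrections mentioned in the abstract, which is why the mechanism is interesting despite the tiny ejected mass.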
Work towards MSc degree under the supervision of Prof. Amir Degani
When: 15.10.2020 at 10:00
Abstract: Data sensing and information processing are two key technological abilities that are being integrated into construction progress monitoring and evaluation. We propose an automated system of projection and scanning technology to provide real-time information and feedback regarding the quality and accuracy of worker performance. Real-time automated error detection enables better, more efficient planning for the entire project. The goal is to achieve proper quality after a single iteration, with fully automated inspection and no additional rework. As a proof-of-concept, we demonstrate the system on a wall plastering operation to exemplify real-time sensing, processing, and visual feedback. The chosen application is difficult to measure by conventional means, and errors are difficult to detect. Our system monitors the progress of the procedure using periodic laser scanning, obtaining data as a colored point cloud. The system then evaluates the surface flatness and projects corrections directly onto the surface itself, after optimizing with respect to tolerances and as-planned models. We demonstrate the concept in an experimental setup using a 3D laser scanner and an adjustable angled projector. The results show high-precision detection of wall flatness deviations, with accuracy of up to 2 mm.
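The core of the flatness evaluation step can be illustrated with a minimal sketch: fit a best-fit plane to the scanned wall points and report each point's signed distance to it. This is a plain NumPy/SVD plane fit under simplified assumptions; the actual system's pipeline, as-planned registration, and tolerancing are more involved:

```python
import numpy as np

def flatness_deviations(points):
    """Fit a least-squares plane to a wall point cloud (N x 3, metres)
    and return each point's signed distance to that plane."""
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value of the
    # centred cloud is the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return (points - centroid) @ normal

# Synthetic wall: a flat 2 m x 2 m patch plus a single 2 mm bump.
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(0.0, 2.0, (100, 2)), np.zeros(100)]
pts[0, 2] += 0.002                      # one point protrudes by 2 mm
dev = flatness_deviations(pts)          # bump shows up as ~2 mm deviation
```

Deviations above a tolerance threshold would then be the candidate regions for the projected corrections described in the abstract.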
You can see the seminar here
Congratulations to our PhD student Ayal Taitler for winning the VATAT Prize for Interdisciplinary Research Combining Data Science and a Different Discipline. The award was distributed through the support of the Machine Learning & Intelligent Systems (MLIS) interdisciplinary research program.
Work towards MSc degree under the supervision of Prof. Reuven Katz and Dr. Itzik Klein
When: 9.9.2020 at 10:00
Abstract: An inertial navigation system (INS) provides the platform's position, velocity, and attitude. To that end, initial conditions are required before system operation. While the initial position and velocity are provided by external means, the initial attitude can be determined using the system's inertial sensors in a process known as coarse alignment. For low-cost inertial sensors, only the accelerometer readings are processed to estimate the initial roll and pitch angles. The accuracy of the coarse alignment procedure is vitally important for the accuracy of the navigation solution, particularly in pure-inertial scenarios, due to the navigation solution drift accumulating over time.
In this research, we propose using machine learning (ML) approaches, instead of traditional ones, to conduct the coarse alignment procedure in stationary conditions. A new methodology for the alignment process is proposed, based on ML algorithms such as random forest and more advanced boosting methods such as gradient-boosted trees (XGBoost). Results from simulated alignment of stationary INS scenarios are presented, accompanied by field experiment results. The ML results are compared with traditional coarse alignment methods in terms of time to convergence and performance. The results obtained using the proposed approach show a significant improvement in both the accuracy of and the time required for the alignment process.
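For reference, the traditional accelerometer-only leveling baseline that such ML methods are compared against reduces, for an averaged stationary specific-force measurement, to two closed-form angles. The sketch below assumes an NED navigation frame and ignores sensor biases and noise averaging details:

```python
import math

def coarse_leveling(fx, fy, fz):
    """Classic analytic coarse leveling (non-ML baseline): roll and pitch
    from the averaged stationary specific-force reading f = (fx, fy, fz)
    in the body frame. Stationary, f is the reaction to gravity, so its
    direction in the body frame encodes the tilt (NED convention)."""
    roll = math.atan2(-fy, -fz)
    pitch = math.atan2(fx, math.hypot(fy, fz))
    return roll, pitch

# A level sensor measures f ~ (0, 0, -g): both angles are zero.
r0, p0 = coarse_leveling(0.0, 0.0, -9.81)

# A pure 0.17 rad pitch-up tilt is recovered exactly from the rotated reading.
r1, p1 = coarse_leveling(9.81 * math.sin(0.17), 0.0, -9.81 * math.cos(0.17))
```

Note that heading is unobservable from accelerometers alone, which is why coarse alignment with low-cost sensors estimates only roll and pitch, as the abstract states.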
Work towards PhD degree under the supervision of Prof. Vadim Indelman
When: 31.8.2020 at 11:30
Abstract: Probabilistic inference, such as density (ratio) estimation, is a fundamental and highly important problem that needs to be solved in many different domains, including robotics and computer science. Recently, much research has been devoted to solving it by producing various objective functions optimized over neural network (NN) models. Such Deep Learning (DL) based approaches include unnormalized and energy models, as well as critics of Generative Adversarial Networks, where DL has shown top approximation performance. In this research we contribute a novel algorithm family, which generalizes all of the above and allows us to infer different statistical modalities (e.g., data likelihood and the ratio between densities) from data samples. The proposed unsupervised technique, named Probabilistic Surface Optimization (PSO), views a model as a flexible surface which can be pushed according to loss-specific virtual stochastic forces, with a dynamical equilibrium achieved when the pointwise forces on the surface become equal. Concretely, the surface is pushed up and down at points sampled from two different distributions, with the overall up and down forces becoming functions of these two distributions' densities and of force intensity magnitudes defined by the loss of a particular PSO instance. Upon convergence, the force equilibrium associated with the Euler-Lagrange equation of the loss enforces the optimized model to equal various statistical functions, such as the data density, depending on the magnitude functions used. Furthermore, this dynamical-statistical equilibrium is extremely intuitive and useful, providing many implications and possible usages in probabilistic inference. We connect PSO to numerous existing statistical works which are also PSO instances, and derive new PSO-based inference methods as a demonstration of PSO's exceptional usability. Additionally, we investigate the impact of the Neural Tangent Kernel (NTK) on the PSO equilibrium.
Our study of NTK dynamics during the learning process emphasizes the importance of the model kernel adaptation to the specific target function for a good learning approximation.
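A toy illustration of the up/down force mechanism, using a hypothetical one-dimensional histogram-parameterized "surface" rather than the NN models studied in the research: push the surface up with unit magnitude at samples of a target density, and down with magnitude exp(F(x)) at samples of a known reference density. Force balance then requires p_target(x) = p_ref(x)·exp(F(x)), i.e. the surface converges to the log-density-ratio. The specific distributions and magnitudes below are illustrative choices of one PSO-style instance, not the general framework:

```python
import numpy as np

rng = np.random.default_rng(1)

bins = 20
surface = np.zeros(bins)   # toy "surface" F over [0, 1): one value per bin

def pso_step(surface, lr=0.05, n=512):
    """One stochastic up/down push (mutates `surface` in place).
    Up-force magnitude 1 at target samples; down-force magnitude exp(F)
    at reference samples. Equilibrium: F = log(p_target / p_ref)."""
    up = rng.beta(2, 5, n)        # samples of the target density
    down = rng.uniform(0, 1, n)   # samples of the reference density
    grad = np.zeros(bins)
    np.add.at(grad, np.minimum((up * bins).astype(int), bins - 1), 1.0)
    idx = np.minimum((down * bins).astype(int), bins - 1)
    np.add.at(grad, idx, -np.exp(surface[idx]))
    surface += lr * grad / n

for _ in range(3000):
    pso_step(surface)
```

After training, the surface is high near the Beta(2,5) mode (around x = 0.2, where the density ratio against the uniform reference is about 2.4, so F approaches log 2.4) and strongly negative in the right tail, mirroring the equilibrium argument in the abstract.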
You can see the seminar here
Work towards MSc degree under the supervision of Dr. Erez Karpas and Dr. Tamir Hazan
When: 2.9.2020 at 9:00
Abstract: A hallmark of intelligence is the ability to deduce general principles from examples, principles that remain correct beyond the range of those observed. Generalized Planning deals with finding such principles for a class of planning problems, so that principles discovered using small instances of a domain can be used to solve much larger instances of the same domain. In this work we study the use of Deep Reinforcement Learning and Graph Neural Networks to learn such generalized policies, and demonstrate that they can generalize to instances that are orders of magnitude larger than those they were trained on.
You can see the seminar here