EPFL Student Projects

Social media and crowdsourcing for social good

The student will contribute to a multidisciplinary initiative that uses social media and mobile crowdsourcing for social good. Several projects are available; specific topics include:

      * Social media analytics
      * Visualization of social and crowdsourced data
      * Smartphone apps for mobile crowdsourcing

Students will be working with social computing researchers studying European and developing cities.

Contact: Prof. Daniel Gatica-Perez daniel.gatica-perez@epfl.ch

Robot-Assisted Learning for Object Recognition

Supervisor: Dr. Francois Fleuret (http://www.idiap.ch/~fleuret/)

One of the key computer vision problems solved by modern machine learning is image classification, that is, automatically predicting the semantic content of an image. Standard tasks such as object recognition, face recognition, and scene classification fall into that framework.

Solving this problem involves in particular learning representations that are highly invariant to geometric and lighting changes. This can be done through the hand design of “features”, or in a data-driven way from large data sets, as with deep convolutional networks [Krizhevsky et al., 2012].
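As a minimal illustration of the mechanism behind such invariance (a toy numpy sketch, not part of the project code): a convolutional feature map followed by max-pooling responds identically to a pattern shifted within a pooling cell, which is one source of the local invariance mentioned above.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size):
    """Non-overlapping max-pooling over size x size cells."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

# The same bright 3x3 blob at two nearby positions (shifted by one pixel).
img_a = np.zeros((12, 12)); img_a[2:5, 2:5] = 1.0
img_b = np.zeros((12, 12)); img_b[3:6, 3:6] = 1.0

kernel = np.ones((3, 3)) / 9.0  # a simple averaging "feature detector"

feat_a = max_pool(conv2d(img_a, kernel), 4)
feat_b = max_pool(conv2d(img_b, kernel), 4)
```

The pooled maps peak in the same cell with the same value for both inputs: the one-pixel shift is absorbed by the pooling cell. The project, in contrast, aims at modeling nuisance parameters globally rather than absorbing them locally.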

The objective of this project is to study a new class of predictors able to model jointly the influence of nuisance parameters. Instead of relying on locally invariant features, we want to learn from data how the representation can be globally influenced by these parameters, and to account for them, in a spirit similar to that of pose-indexed features [Fleuret & Geman, 2008]. This work will in particular require the creation of a large-scale data set of images with a robot, in order to have fine control over the illumination and geometric conditions.

Keywords: computer vision, object recognition, deep learning, invariant embeddings

References:

[Fleuret & Geman, 2008] F. Fleuret and D. Geman. Stationary Features and Cat Detection. Journal of Machine Learning Research (JMLR), 9:2549–2578, 2008.

[Krizhevsky et al., 2012] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the International Conference on Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.

Mobile interface for generating robot movements with variations

The aim of the project is to extend the standard spline-based approach used in robotics to an interface allowing the user to define not only the keypoints that the robot should pass through, but also the variations allowed for each keypoint. This will be achieved by a model predictive control implementation of a Bezier curve (see references below). The project will exploit a humanoid robot and a Lenovo Phab2-pro mobile phone, both available at Idiap. The Phab2-pro works with Tango, the augmented reality interface toolkit from Google (see links below). A basic interface between the mobile phone and the robot is already available for the project (by using the ROS middleware).

The goal of the project will be to extend the existing approach to 3D paths, by developing an interface that lets the user define 3D ellipsoids through their center and three principal axes. This interface will then be used to move the robot's hands along the desired paths with natural variations. The developed approach will be evaluated by comparing it to the baseline of defining a single Bezier spline through 3D coordinates. The effect of the variations will be evaluated by letting a group of users observe several repetitions of the movements with natural variations, and contrasting them with repetitions of a single trajectory.
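The idea of keypoints with allowed variations can be sketched in a few lines of numpy (an illustrative toy only; the project itself uses a model predictive control formulation, and the per-keypoint standard deviations below are an axis-aligned stand-in for the ellipsoids described above). Each repetition perturbs the inner control points, producing natural variation while the start and end points stay fixed.

```python
import numpy as np

def bezier(ctrl, n=100):
    """Evaluate a cubic Bezier curve from 4 control points (one per row)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
            + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])

rng = np.random.default_rng(0)

# Nominal 3D control points, and for each an allowed standard deviation.
nominal = np.array([[0., 0., 0.], [1., 2., 0.], [3., 2., 1.], [4., 0., 1.]])
sigma   = np.array([[0., 0., 0.], [.2, .2, .1], [.2, .2, .1], [0., 0., 0.]])

# Five repetitions of the movement, each with its own sampled variation.
repetitions = [bezier(nominal + rng.normal(size=nominal.shape) * sigma)
               for _ in range(5)]
```

Setting the endpoint deviations to zero guarantees every repetition passes exactly through the first and last keypoints, while the intermediate shape varies from one execution to the next.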

Supervisors:
Dr Sylvain Calinon

Keywords:
robot learning, robot interface

Reference:
Berio, D., Calinon, S. and Leymarie, F.F. (2017). Generating Calligraphic Trajectories with Model Predictive Control. In Proc. of the 43rd Conf. on Graphics Interface.

Links:
https://www.ald.softbankrobotics.com/en/cool-robots/pepper
http://www3.lenovo.com/us/en/smart-devices/-lenovo-smartphones/phab-series/Lenovo-Phab-2-Pro/p/WMD00000220
https://developers.google.com/tango/
http://calinon.ch/paper3058.htm

Pose sketching interface to control a humanoid robot

The aim of the project is to create an interface allowing a user to draw a stick figure corresponding to the pose of a humanoid robot (arms and head), which is then used to move the robot to the desired pose. The project will exploit a humanoid robot and a Lenovo Phab2-pro mobile phone, both available at Idiap. The Phab2-pro works with Tango, the augmented reality interface toolkit from Google (see links below). A basic interface between the mobile phone and the robot is already available for the project (by using the ROS middleware).

The first step of the project will be to develop an algorithm to transform the 2D sketch to the closest corresponding 3D pose. Existing algorithms developed in the context of computer graphics interfaces will be used as a starting point for this development. The second step will be to use this algorithm on the mobile phone to control a humanoid robot. The last step will be to evaluate the algorithm and the interface, by comparing it to the baseline approach of moving the robot articulations one-by-one to achieve a desired upper-body pose.
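For the first step, a classical starting point (a hypothetical sketch, not the algorithm the project will necessarily adopt) is to lift a limb drawn in 2D to 3D under an orthographic projection assumption: given the known limb length, the relative depth of the child joint follows from the Pythagorean relation, up to a sign ambiguity that needs a convention or user input.

```python
import numpy as np

def lift_limb(p2d_parent, p2d_child, limb_length):
    """Relative depth of a child joint from its sketched 2D position,
    assuming orthographic projection and a known limb length.
    The sign of the returned depth is ambiguous (towards/away from
    the viewer) and must be resolved separately."""
    d = np.asarray(p2d_child, float) - np.asarray(p2d_parent, float)
    planar_sq = float(d @ d)
    if planar_sq > limb_length ** 2:
        raise ValueError("sketched limb is longer than the real limb")
    return np.sqrt(limb_length ** 2 - planar_sq)

# Example: an upper arm of length 0.30 m drawn as 0.18 m in the sketch plane.
dz = lift_limb((0.0, 0.0), (0.18, 0.0), 0.30)
```

Applying this joint by joint down the kinematic chain gives a candidate 3D pose, which can then be refined against the robot's joint limits before being sent over ROS.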

Supervisors:
Dr Sylvain Calinon

Keywords:
robot learning, robot interface

Links:
https://www.ald.softbankrobotics.com/en/cool-robots/pepper
http://www3.lenovo.com/us/en/smart-devices/-lenovo-smartphones/phab-series/Lenovo-Phab-2-Pro/p/WMD00000220
https://developers.google.com/tango/

(MS or Semester project) Robotic microscopy platform integration

Idiap is developing a microscopy platform that combines a custom light-sheet fluorescence microscope (LSFM or SPIM), a 4D moving stage, multispectral lasers/LEDs, and robotic arms, in order to acquire data from moving objects with moving sensors and illumination. As part of this effort, we are creating a multi-modal abstraction layer over the hardware, so that acquisition and processing techniques can easily be discovered and reproduced, in particular: structured illumination, time-lapse cardiac blood-flow imaging, three-dimensional reconstruction, deblurring, computer vision, and temporal and spatial superresolution. The student will help develop a programming interface connecting sensors, sources, and moving elements with machine learning and signal processing algorithms, and will investigate acquisition protocols for microscopy.
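To give a flavour of what such an abstraction layer could look like (a hypothetical Python sketch; all class and method names here are illustrative, not the platform's actual API): hardware elements expose discovery metadata and a uniform interface, so that an acquisition protocol becomes a plain function that can be replayed against real or mock devices.

```python
from abc import ABC, abstractmethod

import numpy as np

class Device(ABC):
    """Every hardware element reports what it is and what it can do."""
    @abstractmethod
    def describe(self) -> dict: ...

class Sensor(Device):
    @abstractmethod
    def acquire(self) -> np.ndarray: ...

class MockCamera(Sensor):
    """Stand-in sensor so protocols can be scripted and tested
    without the physical microscope attached."""
    def __init__(self, shape=(64, 64)):
        self.shape = shape

    def describe(self):
        return {"kind": "camera", "shape": self.shape}

    def acquire(self):
        return np.zeros(self.shape)

def acquisition_protocol(sensor: Sensor, n_frames: int):
    """A reproducible protocol is just code over the abstract interface."""
    return [sensor.acquire() for _ in range(n_frames)]
```

Because the protocol only depends on the abstract `Sensor` interface, swapping the mock for a real camera driver changes nothing in the acquisition script, which is what makes acquisitions reproducible across hardware configurations.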

Domains: computer programming, electronics, Arduino, optics, big data, machine learning
Contact: adrian.shajkofci@idiap.ch, michael.liebling@idiap.ch

(MS or Semester project) Superresolution methods in Optical Projection Tomography (OPT)

Optical projection tomography is a form of tomography involving optical microscopy; in many ways, it is the optical equivalent of X-ray computed tomography (the medical CT scan). The essential mathematics and reconstruction algorithms used for CT and OPT are similar, for example the Radon transform or iterative reconstruction from projection data. Both medical CT and OPT compute 3D volumes from the transmission of photons through the material of interest. OPT is popular due to the common availability of optical components. The drawbacks of the method are a lack of spatial resolution, due to the requirement of low numerical aperture optics, and a lower temporal resolution, due to the need for images from multiple views. Using computational methods such as deconvolution, guided interpolation, structured illumination, and deep learning, the student will investigate new superresolution techniques, both in spatial resolution and in the temporal resolution of time-lapse movies.
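The projection/reconstruction principle shared by CT and OPT can be sketched in a few lines (a toy unfiltered back-projection in numpy/scipy, for illustration only; practical reconstructions use filtered back-projection or iterative methods, and real OPT data adds the optical blur that motivates the superresolution work above):

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(img, angles):
    """Toy parallel-beam projector: rotate the image, then sum along
    one axis (a discrete stand-in for the Radon transform)."""
    return np.array([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

def back_project(sinogram, angles, size):
    """Unfiltered back-projection: smear each projection across the
    image plane and rotate it back. Filtered variants sharpen this."""
    recon = np.zeros((size, size))
    for proj, a in zip(sinogram, angles):
        smear = np.tile(proj, (size, 1))
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles)

# Phantom: a small bright disk off-centre in a 64x64 image.
size = 64
yy, xx = np.mgrid[:size, :size]
phantom = ((xx - 40) ** 2 + (yy - 30) ** 2 < 36).astype(float)

angles = np.linspace(0.0, 180.0, 36, endpoint=False)
recon = back_project(forward_project(phantom, angles), angles, size)
```

The reconstruction peaks at the disk position but is blurred, which illustrates both why many viewing angles are needed (hence the lower temporal resolution) and where computational superresolution can help.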

Domains: signal processing, deep learning
Contact: adrian.shajkofci@idiap.ch, michael.liebling@idiap.ch