In this interdisciplinary project, we create models of the world, conduct visual data analysis to understand them better, and use machine learning to optimize brain-computer interfaces (BCIs), visualization representations and interfaces, and robotic task planning.
This overarching research direction is split into two PhD project topics.
PhD Project 1: Context-specific Grasping Control and Adaptive Visual Interfaces
In robotic manipulation, human-robot interaction, and human-computer interaction, recent research focuses on advancing context-specific grasping control and adaptive visual interfaces. These interfaces incorporate intricate feedback mechanisms using 2D/3D visualizations on computer screens and augmented reality overlays on physical objects. A critical aspect of this research is the incorporation of multimodal representations, spanning neural to behavioral patterns associated with the object-interaction task, and the integration of visual, auditory, and tactile sensory inputs to construct a comprehensive model of the environment. Developing and analyzing such models further refines our understanding of interaction-specific properties between (i) the human or robotic hand and (ii) the objects or a visualization, enabling nuanced control strategies.
Both in the context of brain-computer interfaces (BCI) and visualization, multimodal data and machine learning methods can be leveraged to enable real-time adaptation and optimization. This multidisciplinary project aims to advance the scientific frontier by fostering the integration of non-invasive BCI, computer vision, and machine learning for more sophisticated and context-aware human-machine collaboration and visualization interfaces.
The goal of this project is to address these questions:
- What is the impact of context on the neural correlates of movement? How can a computational model capture these context-related features?
- To what extent do multimodal (e.g., neural and behavioral) patterns resemble each other during context-specific tasks? How can data visualization techniques be used most effectively to analyze this?
- How can the obtained multimodal representations and the identification of intent enable the dynamic adaptation of a visualization (a minimal sketch of such an adaptation loop follows this list)? Can we go beyond adaptively showing controls that allow users to make certain changes, and even carry out those changes automatically (e.g., rotating a 3D representation)? Would it be beneficial to switch between entirely different visual representations (e.g., from a 3D volumetric representation to a 2D slice-based view for medical data analysis)?
- Can adaptive visual interfaces modulate the motor decision-making processes in the presence of multiple competing, potential actions? What is the interplay between the perception of action opportunities and other processes such as action selection, planning, and execution?
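As a purely illustrative aid for the adaptation question above, the following Python sketch shows one possible shape of such a loop. All names (decode_intent, adapt_visualization, the intent labels) are hypothetical placeholders rather than components of an existing system; a real decoder would be a trained model operating on EEG/EMG and behavioral features.

```python
import random


def decode_intent(neural_features, behavioral_features):
    """Placeholder decoder: stands in for a trained model that maps
    neural (e.g., EEG/EMG) and behavioral features to an intent label."""
    return random.choice(["rotate_3d", "inspect_slice", "idle"])


def adapt_visualization(intent, current_view):
    """Toy adaptation policy: switch the visual representation when the
    decoded intent suggests a different view would better support the task."""
    if intent == "inspect_slice" and current_view == "3d_volume":
        return "2d_slice"      # e.g., slice-based view for detailed inspection
    if intent == "rotate_3d" and current_view == "2d_slice":
        return "3d_volume"     # back to the volumetric overview
    return current_view        # otherwise keep the current representation


if __name__ == "__main__":
    view = "3d_volume"
    for _ in range(5):
        intent = decode_intent([0.1, 0.2], [0.3])
        view = adapt_visualization(intent, view)
        print(f"decoded intent: {intent:>13s} -> view: {view}")
```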
PhD Project 2: Neural Task Planning for Optimizing Visualization and Robot Interaction
We will leverage the power of Large Multimodal Models (LMMs) for continual task planning/modeling and visualization design. In robotic domains and visualization alike, a task is often specified in various forms, such as language and visual instructions. We aim to develop a multimodal model that accommodates both textual and visual modalities and overcomes the shortcomings of conventional approaches: reliance on complex programming and data-collection processes, limited adaptability and scalability, and the need for domain experts to explicitly train the robot or adapt a visualization for specific tasks and to integrate (rule-based) mechanisms for explanation. To achieve this, we employ LMMs as a continual task planner and visualization designer. Our model takes natural task descriptions and the current state as input and generates a hierarchical task plan or visualization as output.
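To illustrate only the intended input/output structure, here is a minimal Python sketch of an LMM-driven hierarchical planner. The function query_lmm is a hypothetical placeholder (it returns a canned response so the example runs); in practice it would call whichever multimodal model the project adopts, and the plan schema shown is an assumption, not the project's actual design.

```python
import json
from dataclasses import dataclass, field


@dataclass
class PlanStep:
    action: str                                   # symbolic action, e.g., "pick" or "place"
    arguments: list                               # objects/locations the action refers to
    substeps: list = field(default_factory=list)  # nested PlanStep objects


def query_lmm(prompt):
    """Hypothetical placeholder for the LMM call; returns a canned
    hierarchical plan as JSON so the sketch is runnable."""
    return json.dumps([
        {"action": "clear_table", "arguments": ["table"], "substeps": [
            {"action": "pick", "arguments": ["mug"], "substeps": []},
            {"action": "place", "arguments": ["mug", "shelf"], "substeps": []},
        ]},
    ])


def parse_plan(raw):
    """Recursively convert the model's JSON output into PlanStep objects."""
    def build(node):
        return PlanStep(node["action"], node["arguments"],
                        [build(child) for child in node.get("substeps", [])])
    return [build(node) for node in json.loads(raw)]


def plan_task(task_description, scene_state):
    """Assemble a prompt from the task description and the current state,
    query the (placeholder) LMM, and parse the hierarchical plan."""
    prompt = (
        "You are a robot task planner. Decompose the task into a JSON list of "
        "steps with the fields 'action', 'arguments', and 'substeps'.\n"
        f"Task: {task_description}\n"
        f"Current state: {json.dumps(scene_state)}"
    )
    return parse_plan(query_lmm(prompt))


if __name__ == "__main__":
    steps = plan_task("Tidy the table",
                      {"objects": ["mug"], "surfaces": ["table", "shelf"]})
    for step in steps:
        print(step)
```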
The goal of the project is to address these questions:
- How can we design a process of prompt learning/engineering that allows LMMs to generate task plans that align with the robot’s physical capabilities and motion primitives?
- How can we employ LMMs to transform natural task descriptions into visualizations that enable humans to efficiently address the respective analysis task?
- How can the recursive decomposition of tasks into subtasks be optimized to minimize hallucinations in task planning by LMMs?
- In both simulated and real-robot environments, in what ways can virtual scenarios improve the adaptability and execution success of task plans generated by LMMs? What role does the iterative generation of synthetic images play in enhancing the accuracy and success rate of hierarchical task planning? How can we effectively integrate such a synthetic model into the task-planning process to improve the robot’s predictive capabilities?
Organisation
Founded in 1614, the University of Groningen enjoys an international reputation as a dynamic and innovative institution of higher education offering high-quality teaching and research. Flexible study programs and academic career opportunities in a wide variety of disciplines encourage the 31,000 students and researchers alike to develop their own individual talents. As one of the best research universities in Europe, the University of Groningen has joined forces with other top universities and networks worldwide to become a truly global center of knowledge.
Within the Faculty of Science and Engineering, a 4-year PhD position is available at the Bernoulli Institute for Mathematics, Computer Science, and Artificial Intelligence. The two research topics are highly interdisciplinary: (i) context-specific grasping control and adaptive visual interfaces and (ii) neural task planning for optimizing visualization and robot interaction.
The candidates would become members of the Cognitive Modeling, Autonomous Systems, and Computer Vision as well as Scientific Visualization and Computer Graphics groups of the Bernoulli Institute for Mathematics, Computer Science, and Artificial Intelligence, working under the supervision of Dr. Andreea Sburlea, Dr. Hamidreza Kasaei, and Dr. Steffen Frey.
Qualifications
The successful candidate should have:
- MSc degree in Artificial Intelligence, Computer Science, Robotics, Biomedical Engineering, Machine Learning, Computational Cognitive Science, or a related field.
- Strong background in at least one of the following: robotics, machine learning, deep learning, computer vision, computer graphics/visualization, neuroscience, biomedical engineering.
- Proficient programming skills in Python, Matlab, or C++ (experience with ROS is a plus).
- Experience working with foundation models, such as LLMs, VLMs, and LMMs, or experience with collecting and analyzing EEG/EMG data is desirable.
- Experience with mathematical optimization is a plus.
- Initiative to drive independent research.
- Proficiency in both verbal and written English.
We are particularly interested in candidates who are motivated and enthusiastic about contributing to an international research team. Applicants with a strong track record in neuroscience, biomedical engineering, machine learning, robotics, or computer vision/graphics are especially encouraged to apply.
Conditions of employment
We offer you, in accordance with the Collective Labour Agreement for Dutch Universities:
- A salary of € 2,872 gross per month in the first year, up to a maximum of € 3,670 gross per month in the fourth and final year for a full-time working week.
- A holiday allowance of 8% of gross annual income and an 8.3% year-end bonus.
- A full-time position (1.0 FTE). The successful candidate will first be offered a temporary position of one year with the option of renewal for another three years. Prolongation of the contract is contingent on sufficient progress in the first year to indicate that a successful completion of the PhD thesis within the next three years is to be expected. A PhD training programme is part of the agreement and the successful candidate will be enrolled in the Graduate School of Science and Engineering.
Application
The application should include:
- Letter of motivation.
- CV (including contact information for at least two academic references).
- Transcripts from your bachelor’s and master’s degrees.
You may apply for this position until 4 November 2024, 11:59 p.m. Dutch local time (CET), i.e., before 5 November 2024, by means of the application form (click on “Apply” below on the advertisement on the university website).
Applications received before 5 November 2024 will be given full consideration; however, the position will remain open until it is filled.
The University of Groningen strives to be a university in which students and staff are respected and feel at home, regardless of differences in background, experiences, perspectives, and identities. We believe that working on our core values of inclusion and equality is a joint responsibility, and we are constructively working on creating a socially safe environment. Diversity among students and staff members enriches academic debate and contributes to the quality of our teaching and research. We therefore invite applicants from underrepresented groups in particular to apply. For more information, see also our diversity policy webpage: https://www.rug.nl/(…)rsity-and-inclusion/
Our selection procedure follows the guidelines of the NVP Recruitment Code: https://www.nvp-hrnetwerk.nl/nl/sollicitatiecode and the European Commission's European Code of Conduct for the Recruitment of Researchers: https://euraxess.ec.europa.eu/jobs/charter/code
We provide career services for partners of new faculty members moving to Groningen.
Unsolicited marketing is not appreciated.
Information
For information you can contact:
- Dr. Andreea Sburlea, [email protected]
- Dr. Hamidreza Kasaei, [email protected]
- Dr. Steffen Frey, [email protected]
Please do not use the e-mail address(es) above for applications.