Machine Learning Research Engineer, SIML - ISE

**Role Number:** 200628549-0836

**Summary**

Join the team building the next generation of Apple Intelligence! The System Intelligence Machine Learning (SIML) organization is looking for a Machine Learning Research Engineer in the domain of multi-modal perception and reasoning. This is an opportunity to work at the core of Apple Intelligence, spanning modalities such as vision, language, gesture, gaze, and touch to enable highly intuitive and personalized intelligence experiences across the Apple ecosystem.

This role requires experience with vision-language models and the ability to fine-tune, adapt, and distill multi-modal LLMs. You will be part of a fast-paced, impact-driven Applied Research organization working on cutting-edge machine learning that is at the heart of the most loved features on Apple platforms, including Apple Intelligence, Camera, Photos, Visual Intelligence, and more!

SELECTED REFERENCES TO OUR TEAM’S WORK:
https://www.youtube.com/watch?v=GGMhQkHCjxo&t=255s
https://support.apple.com/guide/iphone/use-visual-intelligence-iph12eb15...

**Description**

As a Machine Learning Research Engineer, you will help design and develop models and algorithms for multimodal perception and reasoning leveraging Vision-Language Models (VLMs) and Multimodal Large Language Models (MLLMs). You will collaborate with experienced researchers and engineers to explore new techniques, evaluate performance, and translate product needs into impactful ML solutions. Your work will contribute directly to user-facing features across billions of devices.

Your primary responsibilities include:
- Contribute to the development and adaptation of AI/ML models for multimodal perception and reasoning
- Design robust algorithms that integrate visual and language data for comprehensive understanding
- Collaborate closely with cross-functional teams to translate product requirements into effective ML solutions
- Conduct hands-on experimentation, model training, and performance analysis
- Communicate research outcomes effectively to technical and non-technical stakeholders, providing actionable insights
- Stay current with emerging methods in VLMs, MLLMs, and related areas

**Minimum Qualifications**

+ Master’s or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or related field — or relevant industry experience

+ Proficiency in Python and deep learning frameworks such as PyTorch, or equivalent

+ Practical experience with training and evaluating neural networks

+ Familiarity with multimodal learning, vision-language models, or large language models

+ Strong problem-solving skills and ability to work in a collaborative, product-focused environment

+ Ability to communicate technical results clearly and concisely

**Preferred Qualifications**

+ Proven track record of research contributions demonstrated through publications in top-tier conferences and journals

+ Background in multi-modal reasoning, VLM, and MLLM research with impactful software projects

+ Solid understanding of natural language processing (NLP) and computer vision fundamentals

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).
