Machine Learning Architect - Conversational Speech

**Weekly Hours:** 40

**Role Number:** 200653966-0836

**Summary**

The Speech organization within Siri drives major speech recognition, synthesis, and speech-to-speech model advances for features deeply embedded throughout Apple's ecosystem. Our mission is to build cutting-edge infrastructure, datasets, and models that power Siri's conversational AI, dictation, and speech-enabled Apple Intelligence features, spanning natural language understanding, dialog generation, speech recognition, and multimodal interaction. We apply these technologies to create engaging, intelligent, and personalized conversational experiences for millions of Apple users.

We are seeking a Machine Learning Architect to serve as a senior technical leader spanning the full Speech organization. You will set the future modeling direction for all of conversational speech—charting the architectural and algorithmic course for how Apple's speech technologies evolve. You will operate as a hands-on expert who not only defines strategy but also digs into the hardest technical problems, working shoulder-to-shoulder with teams to overcome critical obstacles. Reporting directly to the Speech organization leadership, you will have broad visibility and influence across speech recognition, synthesis, dialog, multimodal foundation models, and speech-to-speech systems, ensuring coherent technical vision and cross-team alignment.

**Description**

As the Machine Learning Architect for Conversational Speech, you will define modeling strategy and technical direction across the Speech organization, establishing a unified architectural vision for speech recognition, speech synthesis, dialog systems, multimodal foundation models, and speech-to-speech technologies. You will serve as the organization's foremost modeling expert, providing deep technical guidance to multiple teams working on interconnected speech capabilities. You will evaluate emerging research and industry trends—including advances in large language models, multimodal architectures, and full-duplex natural conversational systems—and translate them into actionable roadmaps. You will champion production-readiness, ensuring architectural decisions account for on-device constraints, latency, scalability, and robustness. You will collaborate broadly with partner teams across Siri, Apple Intelligence, hardware, and platform engineering to ensure speech modeling investments are well-integrated into Apple's broader AI strategy.

**Minimum Qualifications**

+ 10+ years of experience in machine learning applied to speech or multimodal systems, with progressively increasing technical scope and leadership.

+ Demonstrated expertise as a technical leader or architect who has defined modeling direction across multiple teams or product areas.

+ Deep, hands-on proficiency in modern deep learning, including large language models and end-to-end speech systems.

+ Significant experience with multimodal LLMs, including architecture design, training, adaptation, and deployment of models that integrate speech, audio, and text modalities.

+ Direct experience building speech-to-speech conversational systems, with a strong understanding of full-duplex natural conversational interaction and end-to-end speech pipelines.

+ A track record of translating research into production-quality systems at scale.

+ Expert programming skills in Python and deep learning frameworks such as PyTorch, JAX, or TensorFlow.

**Preferred Qualifications**

+ Ph.D. in Computer Science, Electrical Engineering, Machine Learning, or a closely related field.

+ Experience architecting or leading development of full-duplex natural conversational systems, speech-to-speech models, or multimodal foundation models that have shipped to large-scale user populations.

+ Deep familiarity with the full stack of speech technologies—ASR, TTS, spoken dialog, speaker modeling, audio understanding—and an ability to reason about their interactions and dependencies.

+ Experience with large-scale distributed training and the infrastructure considerations that shape model design at scale.

+ A data-centric perspective on foundation model development, including experience guiding data collection, curation, annotation, and quality strategies.

+ Experience with on-device ML deployment, including model compression, quantization, and latency-aware architecture design.

Apple is an equal opportunity employer that is committed to inclusion and diversity. We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, Veteran status, or other legally protected characteristics. Learn more about your EEO rights as an applicant (https://www.eeoc.gov/sites/default/files/2023-06/22-088_EEOC_KnowYourRights6.12ScreenRdr.pdf).