The Audio Media Technology (AMT) group is at the center of Apple’s innovative products, including the Mac, iPhone, iPad, Apple Watch, AirPods, Apple TV, macOS, iOS, watchOS, and tvOS. AMT’s Core Audio team provides the audio foundation for high-profile features like Siri, phone calls, FaceTime, media capture, and playback, as well as APIs for third-party developers to enrich our platforms.
AMT is looking for passionate software engineers to help build the next generation of Sound Analysis machine listening software. In addition to providing a third-party framework, Sound Analysis powers a range of machine listening features across Apple’s ecosystem. Our highly cross-functional team works with experts across Apple who develop advanced machine learning algorithms to enable Apple products to better understand the world around them, while maintaining Apple’s industry-leading standards for privacy and security.
Excellent software design and programming skills in Swift, Objective-C, and/or C/C++
A passion for understanding end-to-end systems, from the user experience down to the hardware
Experience with machine learning for speech or audio applications
Experience developing real-time audio processing software
Proactive and passionate about learning new technologies
In this exciting role, you will work cross-functionally with machine learning experts, system software engineers, hardware engineers, and designers to create the infrastructure and systems that enable machine listening experiences across Apple’s ecosystem. You will help develop new features from conception to release, with an eye toward Apple’s high standards for user experience, quality, and performance.