Apple Releases Compact AI Models OpenELM for Offline Use


Apple Enters the AI Marketplace

Apple, a major player in the technology industry, has recently made significant strides in the competitive artificial intelligence (AI) marketplace. By introducing eight compact AI models collectively known as OpenELM, the company has demonstrated its commitment to innovation and to staying at the forefront of technological advancement.

OpenELM: A Game-Changer in AI

Published on the open-source AI community platform Hugging Face, Apple’s OpenELM models come in versions ranging from 270 million to 3 billion parameters. They are designed to run efficiently on-device, including on smartphones, and can operate offline, making them suitable for a wide range of applications.

Users have the flexibility to choose between pre-trained or instruction-tuned versions of Apple’s OpenELM models. The pre-trained models provide a solid foundation for further customization and development, while the instruction-tuned models come equipped to interact with end users, making them suitable for conversational AI applications.
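
For readers who want to experiment, checkpoints published on Hugging Face can generally be loaded with the standard transformers library. The sketch below is illustrative only: the repository name apple/OpenELM-270M-Instruct, the reuse of the Llama 2 tokenizer, and the trust_remote_code flag are assumptions about how the OpenELM checkpoints are packaged, so verify them against the model cards before running.

```python
# Minimal sketch: loading an instruction-tuned OpenELM checkpoint from Hugging Face.
# The model ID, tokenizer ID, and trust_remote_code flag below are assumptions --
# confirm them against the published model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M-Instruct"   # assumed repo name; larger variants go up to 3B
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # OpenELM is reported to reuse the Llama 2 tokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Suggest a polite one-sentence reply to an email asking to reschedule a meeting."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping in a pre-trained (non-Instruct) variant follows the same pattern; the difference is that those checkpoints are intended as a base for further fine-tuning rather than for direct conversational use.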

While Apple has not specified particular use cases for these models, they have the potential to power virtual assistants capable of tasks such as analyzing emails, providing intelligent suggestions, and engaging in meaningful interactions with users. This strategic move by Apple mirrors similar efforts by tech giants like Google, which deployed its Gemini AI model on its Pixel smartphone lineup.

Shared Resources and Collaborative Development

The models developed by Apple were trained using publicly available datasets, emphasizing transparency and open collaboration in the AI community. Apple has made the code for CoreNet, the library used to train OpenELM, as well as the “recipes” for its models, accessible to users. This allows developers to gain insights into the model-building process and further enhance the capabilities of these AI tools.

Microsoft’s recent announcement of Phi-3, a family of small language models capable of running locally, positions the company as a direct competitor in this space. With models like Phi-3 Mini, Microsoft has shown that a compact model can deliver fast token generation and, by the company’s own benchmarks, performance rivaling much larger models such as GPT-3.5.

Apple’s Future in AI Integration

While Apple has not yet incorporated its new AI language models into consumer devices, speculation suggests that the forthcoming iOS 18 update may introduce advanced AI features that prioritize on-device processing for enhanced user privacy. Apple Silicon’s unified memory architecture, in which the CPU and GPU share a single pool of RAM, gives its devices a practical advantage for local AI workloads over many Windows machines, where models must fit into dedicated GPU video memory.

Despite these advantages, Apple faces challenges in AI development compared to rival platforms like Windows and Linux, which have established ecosystems for AI applications. Apple’s move away from Nvidia hardware to its own silicon means that relatively little AI tooling targets Apple platforms natively, so running many models requires additional translation layers and added complexity on Apple products.

