AI Impact Tour in Boston on March 27

0 0
Read Time:2 Minute

Abacus AI Unveils Uncensored Open-Source Model Tailored for System-Prompt Compliance

Abacus AI, a startup specializing in an AI-driven end-to-end machine learning (ML) and LLM Ops platform, has recently launched Liberated-Qwen1.5-72B, an uncensored open-source large language model (LLM) finely tuned to adhere to system prompts in various scenarios.

Derived from Qwen1.5-72B, a transformer-based decoder-only language model developed by researchers at Alibaba Group, Liberated-Qwen1.5-72B distinguishes itself by its unparalleled dedication to system instruction adherence. This feature sets it apart from other existing open-source LLMs, positioning it as an ideal choice for real-world applications.

The Significance of System Prompt Adherence in LLM Deployment

Enterprises today are increasingly integrating LLMs into a wide range of use cases, such as customer-facing chatbots. However, during prolonged multi-turn interactions, these AI models may stray off course and produce unexpected responses or actions. Such deviations can lead to undesirable outcomes, like legally binding commitments based on erroneous dialogue exchanges.

To counteract such challenges, ensuring strict compliance with system prompts has become a pivotal concern for AI developers. While many open-source LLMs fall short in this aspect, Abacus addresses this issue admirably with Liberated-Qwen1.5-72B.

Leveraging a novel open-source dataset known as SystemChat, comprised of 7K synthetic conversations generated using Mistral-Medium and Dolphin-2.7-mixtral-8x7b, Abacus honed the model to consistently respect system instructions, even when they conflicted with user input throughout the conversation.

Performance Evaluation and Future Endeavors

Abacus tested Liberated-Qwen1.5-72B on MT-Bench, revealing its superior performance compared to other open-source models in human evaluation metrics. On MMLU, assessing world knowledge and problem-solving aptitude, the model garnered a commendable score of 77.13, placing it among the top-performing LLMs, including Qwen1.5-72B and Abacus’ Smaug-72B.

Despite its impressive capabilities, it is essential to note that Liberated-Qwen1.5-72B lacks censorship mechanisms, permitting unrestricted responses to any queries while upholding system prompt compliance. Abacus advises users to implement their alignment layer before deploying the model as a service.

Currently licensed under tongyi-qianwen with plans for enhanced performance optimization and the release of more advanced models by combining the SystemChat dataset with Smaug’s training data, Abacus remains committed to refining Liberated-Qwen1.5-72B’s functionality for future applications.

Image/Photo credit: source url

About Post Author

Chris Jones

Hey there! 👋 I'm Chris, 34 yo from Toronto (CA), I'm a journalist with a PhD in journalism and mass communication. For 5 years, I worked for some local publications as an envoy and reporter. Today, I work as 'content publisher' for InformOverload. 📰🌐 Passionate about global news, I cover a wide range of topics including technology, business, healthcare, sports, finance, and more. If you want to know more or interact with me, visit my social channels, or send me a message.
Happy
Happy
0 %
Sad
Sad
0 %
Excited
Excited
0 %
Sleepy
Sleepy
0 %
Angry
Angry
0 %
Surprise
Surprise
0 %