• OpenAI's next-gen Orion model is hitting a serious bottleneck, according to a new report

    From TechnologyDaily@1337:1/100 to All on Monday, November 11, 2024 19:15:04
    OpenAI's next-gen Orion model is hitting a serious bottleneck, according to a new report: here's why

    Date:
    Mon, 11 Nov 2024 19:00:00 +0000

    Description:
    In certain areas, next-gen Orion is failing to impress compared to the existing GPT-4 model.

    FULL STORY ======================================================================

    • OpenAI is reportedly having trouble with Orion in certain areas like coding
    • Progress is slower than expected due to quality issues with training data
    • The next-gen model could also be more expensive

    OpenAI is reportedly running into difficulties with Orion, the next-gen model set to power its AI. The company is struggling to achieve the expected performance gains in certain areas with the successor to GPT-4.

    This comes from a report by The Information, citing OpenAI employees, who claim that the increase in quality seen with Orion is far smaller than that witnessed when moving from GPT-3 to GPT-4.

    We're also told that some OpenAI researchers are saying that Orion isn't reliably better than its predecessor [GPT-4] in handling certain tasks. What tasks would they be? Apparently, coding is a weaker point, with Orion
    possibly not outdoing GPT-4 in this arena, although it is also noted that Orion's language skills are stronger.

    So, for general-use queries and for jobs such as summarizing or rewriting text, it sounds like things are going (relatively) well. However, these
    rumors don't sound quite as hopeful for those looking to use AI as a coding helper.

    So, what's the problem here?

    By all accounts, OpenAI is running into something of a wall when it comes to the data available to train its AI. As the report makes clear, there's a dwindling supply of high-quality text and other data that LLMs (Large Language Models) can work with in pre-release training to hone their powers in solving knottier problems like resolving coding bugs.

    These LLMs have chomped through a lot of the low-hanging fruit, and finding good-quality training data is now becoming a considerably more difficult process, slowing down advancement in some respects.

    On top of that, this training will become more intensive in terms of
    computing resources, meaning that developing (and running) Orion and future AI models down the line will become much more expensive. Of course, the user of the AI will end up footing that bill, one way or another, and there's even talk of more advanced models becoming effectively financially unfeasible to develop.

    Not to mention the impact on the environment in terms of bigger data centers whirring away and sucking more power from our grids, all at a time of increasing concern around climate change.

    While we need to take this report with an appropriate amount of caution,
    there are worrying rumblings here, foreshadowing a serious reality check for the development of AI going forward.

    The Information further notes that a different approach may be taken in terms of improving AI models on an ongoing basis after their initial training; indeed, this may become a necessity from the sound of things. We shall see.

    Orion is expected to debut early in 2025 (and not imminently, as some rumors have hinted), and it may not be called ChatGPT-5, with OpenAI possibly set to change the naming scheme of its AI completely with this next-gen model.
    You might also like...
    • OpenAI edges closer to making its first AI chip in bid to power your favorite new apps
    • ChatGPT integration and image generation are finally here in the iOS 18.2 developer beta
    • ChatGPT search is now live and it could be the Google replacement you've been looking for



    ======================================================================
    Link to news story: https://www.techradar.com/computing/artificial-intelligence/openais-next-gen-orion-model-is-hitting-a-serious-bottleneck-according-to-a-new-report-heres-why


    --- Mystic BBS v1.12 A47 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)