ChatGPT cutoff date
However, the biggest downside of OpenAI’s ChatGPT Plus subscription is the cutoff date of April 2023. In the fast-moving IT world, this is an eternity. OpenAI’s CEO, Sam Altman, has promised to keep ChatGPT up to date, but I doubt that this is technically feasible.
ChatGPT can use Bing to obtain up-to-date information. However, this slows down the chatbot’s response time even more, and the accuracy of the results depends heavily on the contents of the top-ranked pages. Sometimes, ChatGPT fails to recognize that additional information is needed and doesn’t trigger a Bing search at all. It then either refers to its cutoff date or offers an outdated response.
Updating a large language model (LLM) remains a Herculean task. Training a new model demands substantial time and resources, requiring the computational capacity of entire data centers, which makes it enormously costly. Despite the emergence of new AI chips, it’s questionable whether LLMs will ever rival traditional search engines in providing access to newly published information.
Intriguingly, generative AI itself is the main driver of this development. Content production has become faster and easier now that AI models are used for research, drafting text, fact-checking, and proofreading. GenAI is accelerating content creation just as the switch from typewriters to word processors did years ago. Thus, I expect the volume of content on the web to surge in the coming years.
Consequently, even traditional search engines, for which crawling is the most time-consuming task, will find it challenging to keep pace with the growth of the web. Training a model can’t begin until the content has been gathered from across the internet, and the amount of content required to build a robust LLM is comparable to the size of the web itself.
The future of expertAI
AI companies also face significant resistance from online publishers. Initially, companies like OpenAI could easily gather content from the web for their models. For most site owners, the trade-off was having their content included in Google’s index, which guaranteed constant search traffic. The rise of LLMs threatens this arrangement: if users can get complete answers from a chatbot, they may stop using Google, and the publishers’ web traffic will decline.
Frustrations are escalating because publishers feel that OpenAI has appropriated their content and is now using it against them, potentially driving them out of business. The New York Times has already sued OpenAI and Microsoft. While the outcome is still open, it’s likely that many publishers will block AI crawlers. Several renowned IT publishers have already adjusted their robots.txt files. If your site depends on Google traffic, consider integrating this list into your robots.txt. As reputable publishers withdraw their content, LLMs will find it increasingly difficult to generate reliable content.
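To give you an idea of what such an adjustment looks like, here is a minimal robots.txt sketch that blocks a few well-known AI crawlers while leaving Googlebot untouched. The user agents below (OpenAI’s GPTBot, Common Crawl’s CCBot, and Google’s Google-Extended token for AI training) are only examples, not the complete list mentioned above:

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /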
The idea proposed by OpenAI to compensate publishers for their content seems unsustainable. As mentioned, LLMs need a massive amount of content to function properly. The less content available, the higher the probability of generating inaccurate information. Even the combined resources of Microsoft, Google, and Meta are not sufficient to buy the entire internet.
Given the recent frenzy, LLMs will certainly have a place in the future. The “gen” in genAI stands for generative, but it can also be read as “general purpose.” In my view, genAI has no future in this latter sense. Rather, we will likely see numerous new models in the coming months that specialize in specific domains; I think of them as expertAIs. LLMs are designed to excel at language processing and generation, which distinguishes them from large content models.
This leads us back to 4sysops AI, which is currently being updated with our most recent IT content. Although it is powered by GPT-4 Turbo, its responses can differ from those of OpenAI’s own chatbot. The project is still in an early stage of development, and it may be a few months before significant performance differences become noticeable.
To give you an idea of what’s to come, I tested this prompt with ChatGPT 4:
How to block NTLM in Windows Server 2025?
ChatGPT 4 started with this answer:
As of my last update in April 2023, Windows Server 2025 had not been released, and details about its features, settings, or how to manage them, such as blocking NTLM (NT LAN Manager), were not available. However, the process for managing NTLM in Windows Server has remained relatively consistent over the years, with changes primarily in the interface or additional policy settings.
ChatGPT 4’s cutoff date is April 2023
It then tried to guess the steps required to complete the task.
This is how 4sysops AI’s answer begins:
To block NTLM authentication in Windows Server 2025, you can utilize Group Policy settings and PowerShell commands. Here’s how you can approach this:
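For reference, here is a minimal sketch of what such a configuration typically involves. It assumes that Windows Server 2025 still honors the NTLM restriction policies known from earlier Windows Server versions; in a domain, you would normally configure the corresponding “Network security: Restrict NTLM” settings via Group Policy rather than editing the registry directly with PowerShell:

# Registry key behind the "Network security: Restrict NTLM" policies
$lsa = 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0'

# Audit incoming NTLM traffic first (2 = audit all accounts) to find remaining NTLM clients
Set-ItemProperty -Path $lsa -Name AuditReceivingNTLMTraffic -Value 2 -Type DWord

# Deny incoming NTLM traffic (2 = deny all accounts)
Set-ItemProperty -Path $lsa -Name RestrictReceivingNTLMTraffic -Value 2 -Type DWord

# Deny outgoing NTLM traffic to remote servers (2 = deny all)
Set-ItemProperty -Path $lsa -Name RestrictSendingNTLMTraffic -Value 2 -Type DWord

Blocking NTLM outright can break legacy applications, which is why auditing before denying is the usual approach.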
One of the benefits of using 4sysops AI is that you will always have access to the latest and most advanced AI models. OpenAI frequently releases new models that are not yet available to ChatGPT Plus subscribers. If the rumors about Google’s Gemini 1.5 are accurate, we will switch 4sysops AI to this new AI leader. In any case, 4sysops AI is the best AI for IT content.
Accessing 4sysops AI
4sysops AI is currently accessible only to members. Although the service is free for now, it may not remain so in the future. High-ranking, active 4sysops members are likely to receive a substantial discount. You can apply for membership here. Please note that you must fill out the “Bio in IT” field because only IT professionals are eligible to become 4sysops members.
To access 4sysops AI, first sign in to your account and then click your avatar in the top menu. In your profile, you will find a new tab for your personal 4sysops AI chatbot, which I call “discussbot.” Other members cannot see your conversations with 4sysops AI; however, keep in mind that administrators have access to your chats.
Click the ‘Start a New Conversation’ button to save a conversation. This feature sets discussbot apart from the other two chatbots available on 4sysops. The chatbot at the end of each article (endbot) is aware of the content you have just read, so you can refer directly to the text. However, these conversations are not added to the chatbot in your profile (discussbot). Another difference is that endbot currently uses GPT-3.5 Turbo, which results in faster response times.
Please note that the chatbot I call “popbot” is available on every 4sysops page. You can access it by clicking the 4 icon at the bottom left of each page. If you feel that discussbot is slow, you can switch to popbot. You can enlarge the chat window to get more space.
Keep in mind that 4sysops AI may seem slower than ChatGPT. This is because ChatGPT streams its answers in a typewriter mode, whereas 4sysops AI, like Google’s Gemini, always delivers the entire text in one piece. In fact, 4sysops AI should be faster than ChatGPT because it is powered by OpenAI’s Turbo models. All three bots are continuously updated with new IT content.
To conclude this article, I would like to mention two new features that have been added since the initial announcement of 4sysops AI. First, you can now talk to 4sysops AI: click the microphone icon and start speaking. Second, your avatar now appears in the conversation, making it easier to distinguish between prompts and responses.