ChatGPT Plus subscribers are getting some notable new features: they can now upload files and have the chatbot analyze them, and modes such as Browse with Bing activate automatically, with ChatGPT deciding from context when each should be applied.
OpenAI is actively rolling out these beta features exclusively to ChatGPT Plus subscribers. According to user reports, the update adds file uploads and multimodal interactions, meaning users no longer have to manually pick a mode like “Browse with Bing” from a dropdown menu; instead, the chatbot determines the appropriate mode based on the ongoing conversation.
These features essentially bring functionality previously exclusive to the ChatGPT Enterprise plan to the individual ChatGPT Plus subscription. While the multimodal update hasn’t reached my own ChatGPT Plus account yet, I was able to test the Advanced Data Analysis feature, which works as expected. When you upload a file, ChatGPT takes a few moments to process it before it’s ready to work with; after that, the chatbot can summarize the data, answer questions about it, or generate data visualizations based on your prompts.
Importantly, the chatbot isn’t restricted to text files. In an experience shared on Threads, a user uploaded an image of a capybara and asked ChatGPT to create a Pixar-style version of it using DALL-E 3. They then uploaded a second image, this one featuring a wiggly skateboard, and asked the chatbot to incorporate it into the previous image. Interestingly, it chose to add a hat to the skateboard, showing off the chatbot’s creative and adaptive capabilities.