Longer Context Available: 4096 tokens
We're committed to providing a high-quality, free service to our user community, and it's the generous support of our premium subscribers that makes this possible for everyone.
We're excited to announce an important upgrade exclusively for our True Supporter ($14.95) and I'm All In ($24.95) subscribers. Over the past few weeks, we've integrated more advanced GPUs and leveraged recent advances in the field. As a result, we can now offer these subscribers a doubled context window for a richer conversational experience.
For those unfamiliar, Large Language Models (LLMs) operate within a finite token limit, traditionally 2048 tokens (roughly 1,500 words). This limit caps how much conversational history the model can draw on when generating a response. By expanding it to 4096 tokens (approximately 3,000 words), the model can now consider a longer stretch of previous interactions, enabling more context-aware and accurate responses.
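To make the token limit concrete, here is a minimal illustrative sketch of how a chat service might trim conversation history to fit a fixed context window. The `estimate_tokens` heuristic (about 4 tokens per 3 words, matching the rough 2048-tokens-to-1,500-words ratio above) and the function names are our own illustration, not the service's actual tokenizer or API:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 tokens per 3 words.

    This is a heuristic for illustration only; real tokenizers
    split text into subword units and give different counts.
    """
    return max(1, round(len(text.split()) * 4 / 3))


def trim_history(messages: list[str], max_tokens: int = 4096) -> list[str]:
    """Keep the most recent messages that fit within the token budget."""
    kept: list[str] = []
    used = 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg)
        if used + cost > max_tokens:
            break  # older messages no longer fit in the window
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order
```

With a 2048-token budget the loop stops keeping history about twice as early as with a 4096-token budget, which is exactly the practical effect of this upgrade: roughly twice as much of the conversation stays visible to the model.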
We hope our premium subscribers enjoy this enhanced capability. And for our free users, know that it's thanks to our supporters that we can continue to innovate and make improvements across our platform.