Meta just released a badass new LLM called Llama 2.
And if you are anything like us, you just can't wait to get your hands dirty and build with it.
The first step to building with any LLM is to host it somewhere and expose it through an API, so your developers can easily integrate it into your applications.
Why should I use Llama 2 when I can use the OpenAI API?
Three reasons:
- Security — keep sensitive data away from 3rd party vendors
- Reliability — ensure your applications have guaranteed uptime
- Consistency — the model weights never change unless you change them, so results don't drift when a vendor silently updates their model
What will this guide cover?
- Part I — Hosting the Llama 2 model on AWS SageMaker
- Part II — Using the model through an API with AWS Lambda and AWS API Gateway
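As a preview of Part II, here is a minimal sketch of the Lambda function that sits between API Gateway and the SageMaker endpoint. The endpoint name `llama-2-7b-chat` and the request schema (the SageMaker JumpStart Llama 2 chat format, which also requires the `accept_eula=true` custom attribute) are assumptions here; adjust both to match your actual deployment.

```python
import json

# Hypothetical endpoint name -- replace with the name of your deployed
# SageMaker endpoint from Part I.
ENDPOINT_NAME = "llama-2-7b-chat"


def build_payload(prompt, max_new_tokens=256, temperature=0.6):
    """Build the JSON body in the shape the JumpStart Llama 2 chat
    container expects: a list of dialogs, each a list of role/content turns."""
    return {
        "inputs": [[{"role": "user", "content": prompt}]],
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
        },
    }


def lambda_handler(event, context):
    """Lambda entry point: API Gateway passes the HTTP request as `event`;
    we forward the prompt to the SageMaker endpoint and return its reply."""
    import boto3  # bundled in the AWS Lambda Python runtime

    runtime = boto3.client("sagemaker-runtime")
    body = json.loads(event["body"])  # expects a JSON body like {"prompt": "..."}

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        CustomAttributes="accept_eula=true",  # Llama 2 requires EULA acceptance
        Body=json.dumps(build_payload(body["prompt"])),
    )
    result = json.loads(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps(result)}
```

With this in place, a client only needs a plain HTTPS POST to the API Gateway URL with `{"prompt": "..."}` in the body; no AWS credentials or SageMaker SDK required on the caller's side.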
If you want help doing this, you can schedule a FREE call with us at www.woyera.com, where we can show you how to do this live. And yes, it is completely FREE!