How to Set Up a Rasa Development Environment in SageMaker Studio Lab with VS Code: Complete Step-by-Step Guide for Beginners
By Braincuber Team
Published on April 1, 2026
Chatbot development with Rasa is an exciting journey, but getting started can quickly become frustrating when your local machine struggles with resource-intensive machine learning workloads. This beginner's guide walks you through setting up a completely free, cloud-based development environment using AWS SageMaker Studio Lab and code-server, so you can build powerful chatbots without investing in expensive hardware.
By the end of this tutorial, you will have a fully functional Rasa development environment running in the cloud with a full VS Code interface, 15 GB of persistent storage, and enough compute power to train complex NLU models. This step-by-step guide assumes no prior cloud development experience.
What You'll Learn:
- Why local Rasa development can be challenging
- How AWS SageMaker Studio Lab provides free ML computing resources
- Setting up a Conda environment with Rasa and dependencies
- Installing and configuring code-server for VS Code in the cloud
- Exporting your environment for easy sharing and recreation
- Initializing, training, and running your first Rasa chatbot
Step 1: Understanding the Challenge and Solution
Rasa is a powerful open-source framework for building conversational AI chatbots. It gives you complete flexibility, from designing conversation flows to developing NLU (Natural Language Understanding) logic to deployment. However, the initial Rasa bot setup can be surprisingly resource-intensive.
A typical laptop with 4 cores and 16GB RAM may struggle to run Rasa smoothly, with the bot crashing during training or even basic operations. This is because Rasa involves machine learning model training, which requires significant computational resources.
Local Development Challenges
Limited RAM and CPU, crashes during training, no persistent cloud backup, hardware upgrade costs.
Cloud-Based Solution
Free ML platform, 15GB persistent storage, powerful CPU/GPU sessions, VS Code interface via code-server.
The ideal solution requires a free Machine Learning experimentation platform with VS Code support. After evaluating several options, the best approach combines AWS SageMaker Studio Lab for compute power with code-server for the VS Code interface.
| Tool | Purpose | Key Benefit |
|---|---|---|
| AWS SageMaker Studio Lab | Cloud compute platform | Free, 15GB storage, 12hr CPU / 4hr GPU sessions |
| Conda | Environment management | Isolated Python environments, easy sharing |
| Rasa | Chatbot framework | Open-source, flexible, production-ready |
| Code-server | VS Code in browser | Familiar IDE, runs in cloud browser |
Step 2: Setting Up AWS SageMaker Studio Lab
AWS SageMaker Studio Lab is a free machine learning experimentation platform provided by Amazon Web Services. It is an excellent choice for running Rasa ML workloads because it offers generous free resources without requiring a credit card.
15 GB Persistent Storage
Your files, environments, and projects are saved between sessions. No need to reinstall everything each time.
12-Hour CPU Sessions
Run complex algorithms and model training for up to 12 hours on a CPU runtime. Perfect for Rasa NLU training.
4-Hour GPU Sessions
For deep learning workloads, access GPU-accelerated sessions for up to 4 hours. Useful for advanced Rasa DIET configurations.
Important Note
SageMaker Studio Lab provides a JupyterLab server interface, not a full development IDE. To get the VS Code experience we need for Rasa development, we will install code-server on top of it. This gives us the best of both worlds: powerful cloud compute with a familiar IDE interface.
To get started, sign up for an AWS SageMaker Studio Lab account at studiolab.sagemaker.aws. The sign-up process is free and does not require an AWS credit card. Once approved, you will have access to your Studio Lab project environment.
Step 3: Creating the Conda Environment
Let us set up a clean, isolated Python environment for Rasa development. We will use Conda, which comes pre-installed in SageMaker Studio Lab, to manage our dependencies.
Step 3.1: Create and Activate the Environment
Open a terminal in your Studio Lab environment, create an empty folder for your project, and navigate into it. Then run the following commands:
conda create --name rasa-env python=3.8
conda activate rasa-env
These commands create a new Conda environment named rasa-env with Python 3.8, which is a well-supported version for Rasa. The second command activates the environment so all subsequent package installations go into this isolated space.
Step 3.2: Fix Pip Issues
In some Conda environments, pip may have compatibility issues. To ensure a smooth Rasa installation, uninstall and reinstall pip with the following commands:
python -m pip uninstall -y pip
python -m ensurepip
python -m pip install -U pip
python -m pip install --upgrade setuptools
This sequence removes the existing pip installation, reinstalls it using ensurepip, upgrades it to the latest version, and ensures setuptools is up to date. These steps prevent common installation errors when installing Rasa.
Step 3.3: Install Rasa
Now install the Rasa framework:
python -m pip install rasa
This command installs Rasa and all its dependencies including TensorFlow, spaCy, and other ML libraries needed for NLU training and dialogue management. The installation may take several minutes depending on your network speed.
Optional: SSH Access
If you need SSH access for Git operations, run conda install openssh in your environment. This enables secure Git repository cloning and pushing via SSH keys.
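If you take the SSH route, a typical key setup might look like the sketch below. The email comment and the key filename are placeholders, not anything Studio Lab requires:

```shell
# Generate an Ed25519 key pair for Git over SSH.
# The comment (-C) and the key path are placeholders -- adjust to taste.
key_file="$HOME/.ssh/id_ed25519_studiolab"
mkdir -p "$HOME/.ssh"
ssh-keygen -t ed25519 -C "you@example.com" -f "$key_file" -N ""

# Print the public key so you can paste it into your Git host's
# SSH key settings (e.g. GitHub or GitLab).
cat "$key_file.pub"
```

Once the public key is registered with your Git host, SSH-based clone and push commands work from the Studio Lab terminal.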
Step 4: Installing Code-Server for VS Code
Now we install the VS Code server directly into our Conda environment. Code-server is an open-source tool from Coder that runs a full VS Code instance you can access through your web browser.
conda install -y -c conda-forge code-server
This command installs code-server from the conda-forge channel. After this step, your Conda environment rasa-env contains everything you need: Rasa, code-server, and optionally OpenSSH for Git access.
| Component | Installation Command | Purpose |
|---|---|---|
| Python 3.8 | conda create --name rasa-env python=3.8 | Base Python runtime for Rasa |
| Rasa | pip install rasa | Chatbot development framework |
| Code-server | conda install -c conda-forge code-server | VS Code accessible via browser |
| OpenSSH (optional) | conda install openssh | SSH access for Git operations |
Step 5: Exporting Your Environment for Sharing
Since we are maintaining our Rasa code in a Git repository, we want to make it easy for others (or our future selves) to recreate this exact environment without going through the same setup hassle. Conda provides a simple way to export your environment configuration.
conda env export --file environment.yml
This generates an environment.yml file, which is the Conda equivalent of a requirements.txt file. It contains a complete list of all packages and their exact versions installed in your environment.
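For reference, a trimmed environment.yml might look something like this. The version pins here are purely illustrative; your own export will list the exact versions of every installed package:

```yaml
name: rasa-env
channels:
  - conda-forge
  - defaults
dependencies:
  - python=3.8
  - code-server
  - pip
  - pip:
      - rasa==3.1.0   # illustrative version; your export pins the real one
```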
Anyone who clones your repository can recreate the exact environment with a single command:
conda env create -n rasa-env -f environment.yml
This approach makes your project highly portable and reproducible. Team members can set up identical development environments in seconds rather than hours.
Step 6: Launching the VS Code Server
Now that our environment is fully configured, it is time to launch the VS Code server and start developing. Open a new terminal in your Studio Lab environment and run:
code-server --auth none
The --auth none flag disables authentication, making the VS Code server accessible without a password. For added security, you can instead run code-server with password authentication enabled.
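If you prefer password protection, code-server reads its settings from a YAML config file. A minimal sketch is shown below; the password is a placeholder you should replace with your own:

```yaml
# ~/.config/code-server/config.yaml
bind-addr: 127.0.0.1:8080
auth: password
password: change-me-to-something-strong
cert: false
```

With this file in place, starting code-server without flags will prompt for the password on first access.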
Accessing VS Code Through Your Browser
After running the code-server command, you will see logs in the terminal. Now you need to construct the correct URL to access VS Code:
Copy Your Studio Lab URL
Your Studio Lab URL looks like: https://xxxxxxxxxxxxxxxxxx.studio.us-east-2.sagemaker.aws/studiolab/default/jupyter/lab
Replace the Path
Replace /lab at the end with /proxy/8080/ to create your VS Code server URL.
Open in a New Tab
Enter the new URL in a separate browser tab. Wait 3-5 minutes for the server to fully start up.
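The URL rewrite in the steps above can also be done in the terminal with plain Bash string substitution. The URL below is the placeholder from this guide; substitute your own:

```shell
# Your Studio Lab URL (placeholder -- copy yours from the browser).
url="https://xxxxxxxxxxxxxxxxxx.studio.us-east-2.sagemaker.aws/studiolab/default/jupyter/lab"

# Strip the trailing /lab and append /proxy/8080/ to reach code-server.
vscode_url="${url%/lab}/proxy/8080/"
echo "$vscode_url"
```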
Fix Blurred Terminal Text
Once VS Code loads, you may notice that the terminal text appears blurred. To fix this, go to VS Code settings and disable GPU acceleration for the terminal by setting terminal.integrated.gpuAcceleration to off. This resolves the rendering issue in the cloud environment.
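In settings-file form, the fix is a single entry in VS Code's settings.json:

```json
{
  "terminal.integrated.gpuAcceleration": "off"
}
```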
Step 7: Initializing and Training Your Rasa Bot
With VS Code running in your browser, it is time to get hands-on with Rasa development. Let us create a sample project, train the model, and have a conversation with your bot.
Step 7.1: Activate the Environment
Open the integrated terminal in VS Code and activate your Rasa environment:
conda activate rasa-env
Step 7.2: Initialize a Sample Rasa Project
Run the following command to create a basic bot project:
rasa init
This command initializes a sample Rasa project in your current folder. It creates the basic bot structure including:
| File/Directory | Purpose |
|---|---|
| data/nlu.yml | Training data for Natural Language Understanding |
| data/rules.yml | Conversation rules for simple paths |
| data/stories.yml | Multi-turn conversation examples |
| domain.yml | Defines intents, entities, responses, and forms |
| config.yml | NLU pipeline and policy configuration |
| actions.py | Custom action code for complex logic |
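To give a feel for the files above, here is a small excerpt in the style of the generated data/nlu.yml. The exact intents and examples in your project may differ by Rasa version:

```yaml
# data/nlu.yml -- intent training examples
version: "3.1"
nlu:
  - intent: greet
    examples: |
      - hey
      - hello
      - good morning
  - intent: goodbye
    examples: |
      - bye
      - see you later
```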
Step 7.3: Train the Model
Next, train the model. This generates the machine learning model, covering both NLU and the dialogue policies, that your bot will use to understand user messages and decide how to respond:
rasa train
This command saves the trained model as a .tar.gz file in the models subfolder. Training may take a few minutes depending on the complexity of your training data and the compute resources available.
Step 7.4: Run Your Bot
Now for the moment of truth. Run the Rasa shell to interact with your bot:
rasa shell
If everything is set up correctly, your bot should be up and running in the VS Code terminal, which is running inside SageMaker Studio Lab. You can now type messages and have a conversation with your chatbot!
Success!
Your bot is now running in a cloud-based VS Code environment powered by SageMaker Studio Lab. You have a complete, free chatbot development setup that can handle compute-intensive ML workloads.
Step 7.5: Clean Up
When you are done with your development session, remember to clean up properly:
Stop the VS Code Server
Press Ctrl+C in the Studio Lab terminal where code-server is running to stop the VS Code server.
Stop the Studio Lab Runtime
Use the Studio Lab interface to stop your runtime session. This frees up resources for other users and ensures your session time is not wasted.
Resource Reminder
Always stop your Studio Lab runtime when you are done. CPU sessions last up to 12 hours and GPU sessions up to 4 hours. Stopping early preserves your remaining session time for future development.
Summary
In this complete tutorial, you learned how to:
Set up AWS SageMaker Studio Lab
A free cloud ML platform with 15GB storage and powerful CPU/GPU sessions for chatbot development.
Create a Conda Environment
Isolated Python 3.8 environment with Rasa, pip fixes, and all ML dependencies properly configured.
Install Code-Server for VS Code
Full VS Code IDE running in your browser, giving you a familiar development experience in the cloud.
Export and Share Your Environment
Generate environment.yml for easy recreation and team collaboration with a single command.
Build and Run Your First Rasa Bot
Initialize a project, train the NLU model, and have a conversation with your cloud-hosted chatbot.
You now have a completely free, cloud-based chatbot development environment where you can run compute-intensive workloads through the VS Code interface. No expensive hardware required, no local resource constraints, and your work persists between sessions.
Next Steps
Explore Rasa's NLU training pipeline customization, build custom actions in Python, integrate with messaging platforms like Slack or Telegram, and deploy your bot to production using Rasa X or Rasa Open Source deployment options.
Frequently Asked Questions
Is AWS SageMaker Studio Lab really free?
Yes, SageMaker Studio Lab is completely free. It provides 15 GB of persistent storage, 12-hour CPU sessions, and 4-hour GPU sessions at no cost. No credit card is required to sign up.
Why use code-server instead of GitHub Codespaces?
GitHub Codespaces also provides a browser-based VS Code environment, but its free tier offers limited compute for intensive ML training jobs. Code-server running on SageMaker Studio Lab gives you both the VS Code interface and the free compute resources needed for Rasa model training.
How do I fix blurred text in the VS Code terminal?
Go to VS Code settings and search for terminal.integrated.gpuAcceleration. Set it to off to disable GPU acceleration for the terminal, which resolves the blurred text rendering issue in cloud environments.
Can I share my Rasa environment with my team?
Yes! Run conda env export --file environment.yml to generate an environment file. Share this file with your team, and they can recreate the exact environment with conda env create -n rasa-env -f environment.yml.
What Python version should I use for Rasa?
Python 3.8 is recommended for Rasa development as it is well-supported and compatible with all Rasa dependencies. You can specify it when creating your Conda environment with conda create --name rasa-env python=3.8.
Need Help with AI Chatbot Development?
Our experts can help you design, build, and deploy intelligent chatbots using Rasa and other AI frameworks. From NLU training to production deployment, we have you covered.
