In this guide, we’ll create a Python API and host it on an Ubuntu server. Starting with a basic Python function, we’ll evolve it into a full-fledged web service. Here’s what you can expect:
- Python Function: We begin at the foundation, crafting a simple function.
- Virtual Environment: Learn how to set one up on Ubuntu, ensuring a clean workspace.
- Flask: We’ll wrap our function with Flask, turning it into an API endpoint.
- Gunicorn: Dive into how Gunicorn serves our Flask app, enhancing its performance.
- Nginx: Discover how Nginx acts as the protective shield, making our API accessible to the world.
By the end, you’ll have a clear roadmap to create Python APIs, ready for real-world deployment. Let’s get started!
Setting Up Nginx as the Public-Facing Web Server
Nginx is a powerful web server that can also act as a reverse proxy, load balancer, and more (see https://www.nginx.com/). In our example project, Nginx will serve as the public-facing layer, efficiently handling client requests and directing them to our application.
Why Use Nginx?
- Performance: Nginx is designed to handle many simultaneous connections with low memory usage.
- Flexibility: It can serve static files, act as a reverse proxy, and handle SSL/TLS encryption.
- Security: By placing Nginx in front, we can manage client requests, filter malicious traffic, and protect our application.
Installing Nginx on Ubuntu:
Update your package lists:
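```bash
sudo apt update
```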
Install Nginx:
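```bash
sudo apt install nginx
```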
Start Nginx and enable it to run at boot:
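```bash
sudo systemctl start nginx
sudo systemctl enable nginx
```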
Testing Nginx:
After installation, you can test if Nginx is running by navigating to your server’s IP address in a web browser. You should see the default Nginx landing page, indicating that it’s working correctly.
Starting with a Simple Python Function
Now that we have Nginx up and running, let’s shift our focus to the heart of our API: the Python function. This function will serve as the core logic that our API will expose.
For this guide, we’ll use a straightforward function that greets a user by their name:
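A minimal version could look like this:

```python
def greet(name):
    """Return a friendly greeting for the given name."""
    return f"Hello, {name}!"
```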
When you call this function with a name, like `greet("Alice")`, it will return `Hello, Alice!`.
Setting Up a Virtual Environment on Ubuntu
With our Python function ready, it’s time to prepare the environment in which our API will come to life. This involves setting up a virtual environment, a crucial step to ensure our project remains isolated and free from potential conflicts.
What is a Virtual Environment?
A virtual environment is an isolated space where you can install Python packages without affecting the global Python installation. It’s like having a separate, clean room for each project, ensuring that dependencies don’t clash.
Why Use a Virtual Environment?
- Isolation: Keep project-specific dependencies separate, ensuring no conflicts.
- Version Control: Maintain different versions of packages for different projects.
- Clean Deployment: Ensure that your application runs the same way in production as it does in development.
Creating a Virtual Environment Inside the app’s root directory:
The /var/www directory is where web applications served by Nginx conventionally live. Each application needs its own directory, so we’ll create one for our example as well.
Navigate to the /var/www directory and create a sub-directory for your app. Let’s name it `myapi_app`:
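For example:

```bash
cd /var/www
sudo mkdir myapi_app
```

Depending on your setup, you may also want to give your own user ownership of this directory (e.g. `sudo chown $USER:$USER /var/www/myapi_app`) so you can work inside it without sudo.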
Install the virtual environment package:
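On Ubuntu, the standard `venv` module ships in the `python3-venv` package:

```bash
sudo apt install python3-venv
```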
Initiate your virtual environment:
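From inside `/var/www/myapi_app`:

```bash
python3 -m venv myapi_env
```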
Here, we’re establishing a virtual environment named `myapi_env` within our app’s directory.
Activate the virtual environment:
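```bash
source myapi_env/bin/activate
```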
Once activated, you’ll notice a change in your terminal prompt, indicating that you’re now within the confines of the virtual environment.
Exiting the Virtual Environment (Optional):
Just for completeness (we won’t do it now): To step out of the virtual environment and revert to the global Python environment, simply execute:
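```bash
deactivate
```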
Wrapping Up:
With our virtual environment nestled inside the `/var/www/myapi_app` directory, we’ve secured a dedicated space for our API’s development. This isolation becomes even more crucial as we integrate additional components and dependencies. Next on our agenda is Flask, where our humble Python function will metamorphose into an accessible web endpoint.
Create Python API: Wrapping the Function with Flask
Having established a pristine environment, it’s time to introduce Flask to our setup. Flask is a micro web framework for Python, renowned for its simplicity and flexibility. It will serve as the bridge between our Python function and the web, allowing users to interact with our function via HTTP requests.
Why Flask?
Flask provides a straightforward way to define routes (URL patterns) that, when accessed, can execute Python functions and return results. It’s lightweight, easy to use, and perfect for our purpose of exposing a simple function as an API endpoint.
Integrating Flask into Our Project:
Install Flask:
With our virtual environment activated, we’ll install Flask.
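```bash
pip install Flask
```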
Crafting a Basic Flask App: Let’s transform our `greet` function into a Flask route. Create a file named `app.py` inside the `myapi_app` directory and add the following content:
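A minimal sketch that matches the behaviour described just below might look like this:

```python
from flask import Flask, jsonify

app = Flask(__name__)


def greet(name):
    """Core logic: build a greeting for the given name."""
    return f"Hello, {name}!"


@app.route("/greet/<name>")
def greet_user(name):
    """Expose greet() as a JSON API endpoint."""
    return jsonify(message=greet(name))


if __name__ == "__main__":
    # Development server only; Gunicorn will serve the app in production.
    app.run(debug=True)
```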
In this setup, accessing the URL `/greet/Alice` would invoke the `greet_user` function and return a JSON response: `{"message": "Hello, Alice!"}`.
Running the Flask App:
With the Flask app ready, navigate to the directory containing `app.py` and run:
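```bash
# assumes the app.py sketched above, which starts the dev server in its __main__ block
python app.py
```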
This will start a development server, and you should be able to access the API locally at `http://127.0.0.1:5000/greet/Alice`.
Configuring Nginx for the Flask App
Now that our Flask app is up and running, it’s time to configure Nginx to serve as the public-facing layer. Nginx will handle incoming requests and forward them to our Flask app.
Why Use Nginx with Flask?
- Performance: Nginx efficiently manages static content and can handle many simultaneous connections.
- Security: It acts as a protective barrier, filtering malicious traffic before it reaches our Flask app.
- Flexibility: Nginx can manage SSL/TLS encryption, load balancing, and more.
Setting Up Nginx for Our Flask App:
Navigate to Nginx’s `sites-available` directory (`/etc/nginx/sites-available`) and create a new configuration file for our app. Let’s name it `myapi_app`:
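For example, with nano (any editor works):

```bash
sudo nano /etc/nginx/sites-available/myapi_app
```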
Add the following configuration:
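A minimal sketch, assuming Gunicorn will listen on `127.0.0.1:8000` as set up later in this guide, might look like this:

```nginx
server {
    listen 80;
    server_name your_domain_or_ip;

    location / {
        # Forward incoming requests to the Gunicorn process serving the Flask app
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```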
Please replace `your_domain_or_ip` with your server’s public IP or, better, with a domain name if you have one. This configuration tells Nginx to listen for incoming requests to `your_domain_or_ip` on port 80 (the HTTP default) and forward them to our Flask app.
Enable the configuration:
Create a symbolic link from `sites-available` to `sites-enabled`:
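```bash
sudo ln -s /etc/nginx/sites-available/myapi_app /etc/nginx/sites-enabled/
```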
Test the Nginx configuration:
It’s always a good practice to check for syntax errors:
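```bash
sudo nginx -t
```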
If everything is okay, you’ll see a message indicating that the configuration test is successful.
Reload Nginx:
Apply the changes by reloading Nginx:
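```bash
sudo systemctl reload nginx
```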
Conclusion:
With Nginx configured, our Flask app is now accessible via the domain or IP address specified in the configuration. Nginx will efficiently handle incoming traffic, forwarding relevant requests to our Flask app. This setup ensures optimal performance and security for our Python API.
Serving the Flask App with Gunicorn
While Flask’s built-in server is perfect for development, it’s not designed to be exposed to the internet or handle a large number of requests. Enter Gunicorn: a robust WSGI server with a pre-fork worker model that will serve our Flask app efficiently and safely.
Why Gunicorn?
- Performance: Gunicorn spawns multiple worker processes (or threads) to handle incoming requests, ensuring optimal utilization of system resources.
- Reliability: It’s battle-tested and widely used in production environments.
- Compatibility: Gunicorn is compliant with the WSGI standard, making it compatible with a wide range of web frameworks, including Flask.
Setting Up Gunicorn:
Install Gunicorn:
With our virtual environment activated (`source /var/www/myapi_app/myapi_env/bin/activate`), we’ll install Gunicorn.
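```bash
pip install gunicorn
```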
Run the Flask app with Gunicorn:
Navigate to the directory containing `app.py` (`/var/www/myapi_app`) and execute:
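```bash
gunicorn -w 4 app:app
```

The `app:app` argument points Gunicorn at the `app` object inside `app.py`; adjust it if your module or application variable is named differently.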
Here, `-w 4` specifies that Gunicorn should use 4 worker processes. You can adjust this number based on your server’s CPU cores and expected traffic.
Binding to a Specific IP and Port (Optional):
By default, Gunicorn will bind to `127.0.0.1:8000`. If you want to bind it to a different IP or port (e.g., to match the port specified in the Nginx configuration), you can use the `--bind` option:
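For example, spelling the default out explicitly (or substituting whatever address your Nginx configuration forwards to):

```bash
gunicorn -w 4 --bind 127.0.0.1:8000 app:app
```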
Automating Gunicorn with Systemd
Ensuring that our Flask app runs consistently and recovers from potential failures is crucial for a production environment. By integrating Gunicorn with Systemd, we can achieve this level of reliability. Systemd will manage the Gunicorn process, ensuring it starts on boot and restarts if it ever fails.
Why Use Systemd with Gunicorn?
- Automatic Start: Systemd can start Gunicorn when the server boots up, ensuring our API is always available.
- Process Management: Easily start, stop, and restart the Gunicorn process using simple Systemd commands.
- Resilience: If Gunicorn crashes for some reason, Systemd can automatically restart it.
Setting Up Gunicorn with Systemd:
Create a Systemd Service File:
Navigate to the systemd system directory and create a new service file for our app:
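For example, with nano (any editor works):

```bash
sudo nano /etc/systemd/system/myapi_app.service
```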
Add the Following Configuration:
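A minimal sketch, assuming the paths used throughout this guide (`/var/www/myapi_app`, the `myapi_env` virtual environment, and `app:app` as the WSGI entry point), might look like this:

```ini
[Unit]
Description=Gunicorn instance serving the myapi_app Flask API
After=network.target

[Service]
User=your_username
Group=www-data
WorkingDirectory=/var/www/myapi_app
Environment="PATH=/var/www/myapi_app/myapi_env/bin"
ExecStart=/var/www/myapi_app/myapi_env/bin/gunicorn -w 4 --bind 127.0.0.1:8000 app:app
Restart=always

[Install]
WantedBy=multi-user.target
```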
Replace `your_username` with your actual username and adjust paths if they differ.
Which User would I use to run gunicorn?
When deploying a web application in a production environment, it’s a common practice to run the application server (in this case, Gunicorn) as a non-root, system user. This is a security best practice to minimize potential damage in case the application is compromised.
Here are the common choices:
- www-data: This is a standard user for web servers on many Linux distributions, including Ubuntu. It’s commonly used for web servers like Apache and Nginx. Running Gunicorn as `www-data` makes sense if you’re integrating with a web server, as it ensures file permissions are consistent.
- Dedicated User: Some administrators prefer to create a dedicated user specifically for the application. This can provide more granular control over permissions and resources. For example, you could create a user named `myapiuser` just for your Flask app.
Start and Enable the Gunicorn Service:
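```bash
# pick up the new unit file, then start the service and enable it at boot
sudo systemctl daemon-reload
sudo systemctl start myapi_app
sudo systemctl enable myapi_app
```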
The `enable` command ensures that the Gunicorn service starts automatically on boot.
Managing the Service
- To check the status: `sudo systemctl status myapi_app`
- To restart the service: `sudo systemctl restart myapi_app`
- To stop the service: `sudo systemctl stop myapi_app`
Conclusion:
With Gunicorn now managed by Systemd, our Flask app gains an added layer of reliability. Systemd will keep an eye on the Gunicorn process, ensuring it’s always running and serving our API. This setup provides peace of mind, knowing that our API remains resilient against unexpected issues.
Securing the API with HTTPS using Let’s Encrypt
In the modern web, HTTPS is becoming a standard, not just a luxury. It ensures the confidentiality and integrity of data as it travels between the client and server. Let’s Encrypt provides free SSL/TLS certificates, making it easier than ever to secure web services.
Why HTTPS?
- Data Security: Encrypts the data transferred between the client and server, protecting it from eavesdroppers.
- Trust: A padlock icon in the browser assures users that their connection is secure.
- SEO Benefits: Search engines, like Google, give a slight ranking boost to HTTPS websites.
Setting Up Let’s Encrypt:
Install Certbot:
Certbot is the recommended client for Let’s Encrypt. It automates the process of obtaining and renewing certificates.
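One common approach on Ubuntu is to install Certbot and its Nginx plugin from the distribution repositories (Certbot’s documentation also describes a snap-based install):

```bash
sudo apt install certbot python3-certbot-nginx
```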
Obtain the SSL Certificate:
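```bash
sudo certbot --nginx -d example.com -d www.example.com
```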
Please make sure to replace `example.com` and `www.example.com` with your domain (the same as in the Nginx configuration above).
With HTTPS in place, our API is not only functional but also secure. Users can trust that their data is protected, and we adhere to best practices for web security. As the web continues to evolve, ensuring the privacy and security of our users remains paramount.
Maintenance Guide: Essential Commands
Maintaining your setup is crucial for ensuring optimal performance, security, and reliability. Below is a compilation of commands to help you monitor and maintain your installation:
1. Nginx:
- Check Configuration Syntax: `sudo nginx -t`
- Reload Configuration (after changes): `sudo systemctl reload nginx`
- Check Nginx Status: `sudo systemctl status nginx`
- Restart Nginx: `sudo systemctl restart nginx`
2. Gunicorn (via Systemd):
- Check Gunicorn Service Status: `sudo systemctl status myapi_app`
- Restart Gunicorn Service: `sudo systemctl restart myapi_app`
- View Gunicorn Logs: `journalctl -u myapi_app`
3. Certbot (Let’s Encrypt):
- Test Certificate Renewal: `sudo certbot renew --dry-run`
- Force Certificate Renewal: `sudo certbot renew --force-renewal`
- Obtain Certificate Information: `sudo certbot certificates`