
How to Integrate FastAPI With SQLAlchemy

FastAPI is a high-performance Python web framework for building APIs, based on standard Python type hints. SQLAlchemy, meanwhile, is a Python toolkit for working with SQL databases and object-relational mapping (ORM). By combining these two powerful tools, developers can build robust, scalable web applications.

In this article, we will discuss integrating FastAPI with SQLAlchemy and touch on key issues such as setting up the development environment, configuring the project, database operations (CRUD), dependency injection handling, Alembic integration for migrations, testing, and best practices. 

An Introduction to FastAPI and SQLAlchemy

FastAPI is a framework for building APIs that uses standard Python type hints (Python 3.7+) for validation. It's designed to be easy for developers to learn and use while remaining powerful and efficient, and it provides automatic data validation, serialization, and OpenAPI documentation generation.

SQLAlchemy is a comprehensive SQL toolkit and ORM library for Python. Thanks to SQLAlchemy's ORM, developers can work with the database through Python objects instead of writing raw SQL queries.
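For instance, with the ORM you can define a table as a Python class and query it without writing SQL. Here is a minimal sketch using an in-memory SQLite database (the model and names are illustrative, not from this article's application):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

# In-memory SQLite keeps the example self-contained
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(User(name="Alice"))
session.commit()

# Query with Python objects instead of raw SQL
user = session.query(User).filter(User.name == "Alice").first()
print(user.name)
session.close()
```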

Combining FastAPI with SQLAlchemy provides several advantages: 

  • Type safety: FastAPI leverages Python type hints to catch errors early during development.
  • Developer productivity: FastAPI's automatic documentation generation combined with SQLAlchemy's ORM increases developer productivity.
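The type-safety point is easy to demonstrate with a plain Pydantic model, which is the same validation FastAPI applies to request bodies (a sketch with illustrative names):

```python
from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    name: str
    price: float

# Valid input is coerced and accepted ("9.99" becomes a float)
item = Item(name="Widget", price="9.99")

# Invalid input is rejected before it reaches your application logic
try:
    Item(name="Widget", price="not a number")
except ValidationError:
    print("validation error caught")
```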

Risks Involved with SQLAlchemy

Although there are several benefits of using SQLAlchemy, you should also know the possible risks:

  • Performance overhead: ORMs tend to add performance overhead, especially for complex queries.
  • Steep learning curve: SQLAlchemy can be overwhelming for beginners, and it takes time to learn well.
  • Abstraction leakage: At times the ORM abstraction can "leak," forcing developers to drop down to raw SQL for certain queries.
  • Complexity with large projects: As your project grows, the ORM's layers of abstraction can become difficult to maintain.
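When the abstraction does leak, SQLAlchemy provides an explicit escape hatch: you can run raw SQL through the same engine. A minimal sketch:

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///:memory:")

with engine.connect() as conn:
    # text() wraps a raw SQL string for explicit execution
    result = conn.execute(text("SELECT 1 + 1"))
    value = result.scalar()
    print(value)
```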

Setting Up the Environment

To get started, set up your development environment. Make sure you have Python 3.7+ installed. We will use pipenv to create a virtual environment for the project; install it with the command below.

pip install pipenv

Create and activate the virtual environment, then install the dependencies:


pipenv shell
pip install fastapi sqlalchemy pytest uvicorn

Configuring FastAPI with SQLAlchemy

Create a new file, main.py, in the root directory of the project and add the following code: 


# file path: main.py
from fastapi import FastAPI, Depends, HTTPException
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, Session

DATABASE_URL = "sqlite:///./test.db"

engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()

app = FastAPI()

# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# SQLAlchemy models
class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)
    description = Column(String, index=True)

Base.metadata.create_all(bind=engine)

# Pydantic models
from pydantic import BaseModel

class ItemCreate(BaseModel):
    name: str
    description: str

class ItemResponse(BaseModel):
    id: int
    name: str
    description: str

    class Config:
        orm_mode = True  # Pydantic v1; in Pydantic v2 use model_config = ConfigDict(from_attributes=True)

The above code block configures the SQLAlchemy engine, session, and models and defines the Pydantic models for request and response validation. You can replace test.db with the name of your database. 

CRUD Operations with SQLAlchemy

Now that you have the SQLAlchemy engine ready, we will start building the CRUD API, using a different HTTP method for each operation: POST, GET, PUT, and DELETE. Let's get started.

Create Operation

In the POST request below, we are taking ItemCreate (defined above) as the request body and saving the data in the database. 


@app.post("/items/", response_model=ItemResponse)
def create_item(item: ItemCreate, db: Session = Depends(get_db)):
    db_item = Item(name=item.name, description=item.description)
    db.add(db_item)
    db.commit()
    db.refresh(db_item)
    return db_item

Read Operation

In the code snippet below, we are using item_id as the path parameter to fetch the details of the item. We return an HTTP Exception if we don’t find the item in the database. 


@app.get("/items/{item_id}", response_model=ItemResponse)
def read_item(item_id: int, db: Session = Depends(get_db)):
    db_item = db.query(Item).filter(Item.id == item_id).first()
    if db_item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    return db_item

You can test the endpoint using the following curl request: 


curl -X GET "http://127.0.0.1:8000/items/1"

Update Operation

We will use the PUT HTTP method to update the row in the database based on item_id. We return an HTTP exception if we don't find data for the given item_id.


@app.put("/items/{item_id}", response_model=ItemResponse)
def update_item(item_id: int, item: ItemCreate, db: Session = Depends(get_db)):
    db_item = db.query(Item).filter(Item.id == item_id).first()
    if db_item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    db_item.name = item.name
    db_item.description = item.description
    db.commit()
    db.refresh(db_item)
    return db_item

You can test the endpoint using the following curl request: 


curl -X PUT "http://127.0.0.1:8000/items/1" -H "Content-Type: application/json" -d '{"name": "Updated Item", "description": "Updated description"}'

Delete Operation

In the following code snippet, we are performing a delete operation based on item_id provided by the user using the DELETE HTTP method. 


@app.delete("/items/{item_id}")
def delete_item(item_id: int, db: Session = Depends(get_db)):
    db_item = db.query(Item).filter(Item.id == item_id).first()
    if db_item is None:
        raise HTTPException(status_code=404, detail="Item not found")
    db.delete(db_item)
    db.commit()
    return {"detail": "Item deleted"}

You can test the endpoint using the following curl request: 


curl -X DELETE "http://127.0.0.1:8000/items/1"

Dependency Injection in FastAPI

FastAPI provides a powerful dependency injection system that allows you to easily manage dependencies. Dependency injection is a design pattern that allows you to decouple your code by passing dependencies to your functions and classes instead of instantiating them directly. 

In our example, we used dependency injection to manage the database session: 


# Dependency for getting the DB session
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

Handling Migrations with Alembic

Alembic is a lightweight database migration library for SQLAlchemy. It helps manage database schema changes over time, making it easier to handle database migrations in a structured and version-controlled manner. 

If you haven’t already installed Alembic, you can do so using the command below. 


pip install alembic

Once that's done, initialize Alembic in your project. This will create an alembic directory for migration scripts and a configuration file named alembic.ini in the project root:


alembic init alembic

Next, open the alembic.ini file and set sqlalchemy.url to your SQLite database URL:


# alembic.ini

[alembic]
# path to migration scripts
script_location = alembic

# database URL
sqlalchemy.url = sqlite:///./test.db

After updating alembic.ini, modify the env.py file in the alembic directory to configure the database URL and import your SQLAlchemy models. Replace the existing code in env.py with the following.


# file path: alembic/env.py

from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from main import Base  # Import your Base from the main application
target_metadata = Base.metadata

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
    """Run migrations in 'offline' mode.
    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well. By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.
    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True
    )

    with context.begin_transaction():
        context.run_migrations()

def run_migrations_online():
    """Run migrations in 'online' mode.
    In this scenario we need to create an Engine
    and associate a connection with the context.
    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )

    with connectable.connect() as connection:
        context.configure(
            connection=connection, target_metadata=target_metadata
        )
        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()

Create a Migration Script

With the configuration in place, we can now create a new migration script. Alembic will compare the current state of the database with your models and generate an appropriate migration script. 

alembic revision --autogenerate -m "Initial migration"

Apply the generated migration to update your database schema. 

alembic upgrade head

Whenever you make changes to your database models, you can create and apply new migration scripts to keep your database schema in sync with your application models. 


alembic revision --autogenerate -m "Add new field to Item"
alembic upgrade head

Testing FastAPI Applications with SQLAlchemy

Writing test cases for your applications is crucial, as it ensures that new changes don't break existing functionality. Pytest is one of the most popular Python testing libraries.

For our FastAPI application, we will use the TestClient provided by FastAPI itself. We use a Pytest fixture named client to override the application's get_db dependency, ensuring that the tests run against the test database instead of the production database.


# test_main.py

from fastapi.testclient import TestClient
from main import app, get_db, Base
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import pytest

DATABASE_URL = "sqlite:///./test.db"

engine = create_engine(DATABASE_URL, connect_args={"check_same_thread": False})
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# Reuse the application's Base so create_all actually creates the items table
Base.metadata.create_all(bind=engine)

@pytest.fixture(scope="module")
def client():
    def override_get_db():
        db = TestingSessionLocal()
        try:
            yield db
        finally:
            db.close()
    app.dependency_overrides[get_db] = override_get_db
    with TestClient(app) as c:
        yield c
    app.dependency_overrides.clear()

def test_create_item(client):
    response = client.post("/items/", json={"name": "Test Item", "description": "This is a test item."})
    assert response.status_code == 200
    data = response.json()
    assert data["name"] == "Test Item"
    assert data["description"] == "This is a test item."

Note: This code is for demonstration and testing purposes only. It is not production ready, and you might want to thoroughly test and implement proper error handling, security measures, and optimizations before deploying to production. 

For the complete code, check out the GitHub repository.

Best Practices

  • Connection pooling: Use connection pooling to manage your database connections efficiently.
  • Optimize queries: Improve performance using SQLAlchemy's query optimization features, such as eager loading of relationships.
  • Exception handling: Handle exceptions consistently so that database errors don't crash your application.
  • Bulk operations: Prefer bulk inserts and updates when working with many records, because they reduce database round trips.
  • Cache responses: Caching keeps frequently used data in memory, reducing database load and improving response times.
  • Monitor performance: Use monitoring tools to analyze how your app performs and identify bottlenecks.
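As an illustration of the bulk-operations point, add_all lets SQLAlchemy flush many inserts in a single batch instead of committing row by row (an in-memory SQLite sketch with illustrative names):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = "items"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
# One add_all + one commit instead of 100 separate commits
session.add_all([Item(name=f"item-{i}") for i in range(100)])
session.commit()

count = session.query(Item).count()
print(count)
session.close()
```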

Conclusion

Integrating FastAPI with SQLAlchemy is an effective way to build web applications in Python. We discussed how to set up the environment, configure FastAPI with SQLAlchemy, perform CRUD operations, implement dependency injection, and manage migrations with Alembic, finishing with testing and best practices. By following these steps, you can design sturdy, scalable web applications that are easier to maintain and modify without breaking their functionality.

While using ORMs is flexible and powerful, it's worth considering the level of database abstraction they provide. You may want to add another layer of abstraction by designing a higher-level API layer. This approach prevents exposing direct database operations in your API endpoints. 

For example, constructing service layers that wrap CRUD operations can help avoid overexposure of database details. This can lead to better separation of concerns and, subsequently, more manageable long-term codebases, especially in the context of larger projects.
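A service layer of that kind can be sketched as follows. The names here (ItemService, ItemDTO) are hypothetical, and a plain dict stands in for the database so the example is self-contained; in a real application the service would wrap a SQLAlchemy Session:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItemDTO:
    id: int
    name: str

class ItemService:
    """Hypothetical service layer: endpoints call this instead of the Session."""

    def __init__(self, db):
        self.db = db  # in the real app this would be a SQLAlchemy Session

    def get_item(self, item_id: int) -> Optional[ItemDTO]:
        # Translate storage rows into plain DTOs so callers never see ORM objects
        name = self.db.get(item_id)
        return ItemDTO(id=item_id, name=name) if name else None

# A dict stands in for the database to keep the sketch runnable
service = ItemService({1: "Test Item"})
print(service.get_item(1))
```

Endpoints then depend on the service, so swapping the storage layer never touches the API surface.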

Alternatively, for developers who want to focus on building their application rather than on database programming, Neurelo offers a platform with auto-generated CRUD APIs, AI-powered complex query generation, built-in observability, Git-like migrations, and much more. Try it yourself.

This post was written by Keshav Malik, a highly skilled and enthusiastic security engineer. Keshav has a passion for automation, hacking, and exploring different tools and technologies. With a love for finding innovative solutions to complex problems, Keshav is constantly seeking new opportunities to grow and improve as a professional. He is dedicated to staying ahead of the curve and is always on the lookout for the latest and greatest tools and technologies.