Export Database Schema to Code
The atlas schema inspect command can read an existing schema from a live database or any supported schema format, such as
HCL, SQL, or an ORM schema, and generate an equivalent
representation in HCL or SQL. This lets you capture the current state of a schema
as code so it can be stored, modified, or used in migration workflows in a version-controlled environment.
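For example, to capture a live PostgreSQL database as a single SQL file (the connection URL below is a placeholder; substitute your own):

```shell
# Inspect a running database and print its schema as SQL.
atlas schema inspect \
  -u "postgres://app:pass@localhost:5432/mydb?sslmode=disable" \
  --format '{{ sql . }}' > schema.sql
```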
Split Your Schema into Multiple Files
By default, the entire schema is written to standard output as a single file. For larger projects or cases where you want a structured layout, Atlas can automatically organize the output into multiple files and directories. This makes it easy to inspect, edit, and maintain your schema as code in version control systems.
Quick Commands
Use the --format flag with the split and write functions to separate the schema objects into multiple files:
```shell
# SQL format.
atlas schema inspect -u '<url>' --format '{{ sql . | split | write }}'

# HCL format.
atlas schema inspect -u '<url>' --format '{{ hcl . | split | write }}'
```
Using Exporters
For repeatable export workflows, configure exporters in your atlas.hcl project file:
exporter "sql" "schema" {
path = "schema"
split_by = object
}
env "prod" {
url = getenv("DB_URL")
export {
schema {
inspect = exporter.sql.schema
}
}
}
```shell
atlas schema inspect --env prod --export
```
For advanced configuration options like custom naming conventions, file extensions, HTTP webhooks, and chaining multiple exporters, see the Configure Schema Exporters section below.
split
The split function splits schema dumps into multiple files and produces txtar-formatted output. The result is then
piped to the write function, which writes the output to files and directories.
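As a rough illustration, the txtar framing places each file's contents after a "-- path --" header line. The file names and statements in this fragment are hypothetical:

```
-- tables/users.sql --
CREATE TABLE users (id int PRIMARY KEY);
-- tables/profiles.sql --
CREATE TABLE profiles (id int PRIMARY KEY);
```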
The API for the split function depends on the input format used, either hcl or sql:
SQL Format
When used with the sql function, the split function splits the SQL schema dump into multiple files and subdirectories
whose layout depends on the scope you inspect: either a database or a specific schema.
Split Database Scope
If you inspect a database scope with more than one schema, Atlas will generate a directory for each schema, with subdirectories
for each object type defined in that schema. Database-level objects, such as PostgreSQL extensions, will be generated in
their own directory alongside the schemas directory.
Each object will be defined in its own file within its type's directory, along with atlas:import directives
pointing to its dependencies (a sketch of such a file follows the directory listing below).
A main.sql file will also be generated as an "entry point", containing import lines for all files generated by Atlas.
This allows you to point to the entire schema just by referencing the main.sql file (e.g., file://path/to/main.sql).
A typical output might look like:
```
├── extensions
│   ├── hstore.sql
│   └── citext.sql
├── schemas
│   └── public
│       ├── public.sql
│       ├── tables
│       │   ├── profiles.sql
│       │   └── users.sql
│       ├── functions
│       └── types
└── main.sql
```
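As a sketch, an object file such as tables/profiles.sql might carry an atlas:import comment pointing at the users table it references. The table definition itself is hypothetical:

```sql
-- atlas:import users.sql

-- create "profiles" table (hypothetical example)
CREATE TABLE public.profiles (
  id integer NOT NULL,
  user_id integer NOT NULL REFERENCES public.users (id),
  PRIMARY KEY (id)
);
```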
Split Schema Scope
When inspecting a specific schema, Atlas will only generate subdirectories for each object type defined in that schema.
Each object will be defined in its own file within its type's directory, along with atlas:import directives
pointing to its dependencies.
In addition, a main.sql file will be generated as an "entry point", containing import lines for all files generated by Atlas.
This allows you to easily point to the entire schema by referencing the main.sql file (e.g., file://path/to/main.sql).
Note that database-level objects, such as schemas and extensions, will not be generated. Additionally, CREATE statements will
not be qualified with the schema name, so the generated files can be applied to a different schema, as set by the connection
URL (see the sketch after the directory listing below).
A typical output might look like:
```
├── tables
│   ├── profiles.sql
│   └── users.sql
├── functions
├── types
└── main.sql
```
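Consistent with the note above, the same hypothetical table at schema scope is emitted without a schema qualifier, so the file applies to whichever schema the URL selects:

```sql
-- atlas:import users.sql

-- create "profiles" table: no schema qualifier at schema scope
CREATE TABLE profiles (
  id integer NOT NULL,
  user_id integer NOT NULL REFERENCES users (id)
);
```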
HCL Format
The split function takes two optional arguments: strategy and suffix.
The strategy argument states how the output is split. The following strategies are supported:
- object (default) - Each schema gets its own directory, a subdirectory for each object type, and a file for each object.
- schema - Each schema gets its own file.
- type - Each object type gets its own file.
The suffix argument defines the suffix of the output files, .hcl by default. It is recommended to
use a database-specific suffix for better editor plugin support, for example:
| Database | File Suffix |
|---|---|
| MySQL | .my.hcl |
| MariaDB | .ma.hcl |
| PostgreSQL | .pg.hcl |
| SQLite | .lt.hcl |
| ClickHouse | .ch.hcl |
| SQL Server | .ms.hcl |
| Redshift | .rs.hcl |
| Oracle | .oc.hcl |
| Spanner | .sp.hcl |
| Snowflake | .sf.hcl |
| Databricks | .dbx.hcl |
To work with this directory structure, use the hcl_schema data source in your
atlas.hcl project configuration:
data "hcl_schema" "app" {
paths = fileset("schema/**/*.hcl")
}
env "app" {
src = data.hcl_schema.app.url
dev = "docker://mysql/8/example"
}
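Once defined, this environment can serve as the desired state for other commands. A minimal sketch (the target URL is a placeholder):

```shell
# Apply the schema defined by the split HCL files to a target database.
atlas schema apply --env app -u "mysql://root:pass@localhost:3306/example"
```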
write
The write function takes one argument: path.
The path argument specifies the directory where the output files will be written; it can be relative or absolute.
If no path is specified, the output files are written to the current directory.
The write function creates the directory if it does not exist.
Examples
SQL Format
Default split and write to the current directory:

```shell
atlas schema inspect -u '<url>' --format '{{ sql . | split | write }}'
```

Write to the project/ directory:

```shell
atlas schema inspect -u '<url>' --format '{{ sql . | split | write "project/" }}'
```

Customize indentation to \t and write to the project/ directory:

```shell
atlas schema inspect -u '<url>' --format '{{ sql . "\t" | split | write "project/" }}'
```
HCL Format
Default split and write to the current directory:

```shell
atlas schema inspect -u '<url>' --format '{{ hcl . | split | write }}'
```

Split by object type and write to the schema/ directory for PostgreSQL:

```shell
atlas schema inspect -u '<url>' --format '{{ hcl . | split "type" ".pg.hcl" | write "schema/" }}'
```

Split by schema and write to the schema/ directory for MySQL:

```shell
atlas schema inspect -u '<url>' --format '{{ hcl . | split "schema" ".my.hcl" | write "schema/" }}'
```
Video Tutorial
To see this process in action, check out our video tutorial that covers the entire process using a PostgreSQL schema and SQL-formatted output.
Configure Schema Exporters (Atlas Pro)
Exporters provide a declarative way to configure how schema inspection results are exported. They support advanced options like custom naming conventions, file extensions, HTTP webhooks, and chaining multiple exporters.
SQL
The sql exporter writes the schema as SQL files.
| Argument | Description |
|---|---|
| path | (Required) Output path. File for single output, directory when using split_by. |
| indent | Indentation string. Defaults to "  " (two spaces). Use "\t" for tabs. |
| split_by | Split strategy: object creates a file per database object. |
| naming | File naming when splitting: lower (default), same, upper, or title. |
exporter "sql" "schema" {
path = "schema/sql"
split_by = object
naming = lower
indent = " "
}
env "prod" {
url = getenv("DB_URL")
export {
schema {
inspect = exporter.sql.schema
}
}
}
HCL
The hcl exporter writes the schema as HCL files.
| Argument | Description |
|---|---|
| path | (Required) Output path. File for single output, directory when using split_by. |
| split_by | Split strategy: object, schema, or type. |
| naming | File naming when splitting: lower (default), same, upper, or title. |
| ext | File extension (defaults to .hcl). Use .pg.hcl, .my.hcl, etc. for editor support. |
exporter "hcl" "schema" {
path = "schema/hcl"
split_by = object
naming = lower
ext = ".pg.hcl"
}
env "prod" {
url = getenv("DB_URL")
export {
schema {
inspect = exporter.hcl.schema
}
}
}
HTTP
The http exporter sends schema inspection results to an HTTP endpoint.
| Argument | Description |
|---|---|
| url | (Required) Request URL (http or https). |
| method | (Required) HTTP method: GET, POST, PUT, PATCH. |
| body | Static request body. Mutually exclusive with body_template. |
| body_template | Go template for dynamic body. Supports {{ sql . }}, {{ json . }}, {{ .MarshalHCL }}. |
| headers | Map of HTTP headers. |
| request_timeout_ms | Request timeout in milliseconds. |
| retry | Retry block with attempts, min_delay_ms, max_delay_ms. |
exporter "http" "webhook" {
url = "https://api.example.com/schemas"
method = "POST"
body_template = "{{ sql . }}"
headers = {
"Content-Type" = "text/plain"
}
}
env "prod" {
url = getenv("DB_URL")
export {
schema {
inspect = exporter.http.webhook
}
}
}
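The timeout and retry arguments from the table above compose with the same block. A sketch with illustrative values:

```hcl
exporter "http" "webhook_retry" {
  url                = "https://api.example.com/schemas"
  method             = "POST"
  body_template      = "{{ json . }}"
  request_timeout_ms = 5000
  retry {
    attempts     = 3
    min_delay_ms = 100
    max_delay_ms = 1000
  }
}
```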
Multiple Exporters
The multi exporter chains multiple exporters together.
| Argument | Description |
|---|---|
| exporters | (Required) List of exporter references. |
| on_error | Error handling: FAIL (default), CONTINUE, or IGNORE. |
exporter "sql" "schema" {
path = "schema/sql"
split_by = object
}
exporter "hcl" "schema" {
path = "schema/hcl"
split_by = object
}
exporter "multi" "all" {
exporters = [
exporter.sql.schema,
exporter.hcl.schema,
]
}
env "prod" {
url = getenv("DB_URL")
export {
schema {
inspect = exporter.multi.all
}
}
}
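If one exporter should not block the others, the on_error argument from the table above relaxes the default FAIL behavior. A sketch:

```hcl
exporter "multi" "all" {
  exporters = [
    exporter.sql.schema,
    exporter.hcl.schema,
  ]
  # Keep running the remaining exporters if one fails.
  on_error = CONTINUE
}
```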
For the complete exporter reference, see the Exporters documentation.
Next Steps
After exporting your database schema to code, you can leverage this code in various ways to manage and evolve your database schema effectively. Here are some recommended next steps:
Versioned Migrations
Manage your schema changes through versioned migration files. Use the atlas migrate diff command to generate migrations, atlas migrate apply to apply them, and integrate Atlas into your CI/CD pipeline for safe, auditable deployments.
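A minimal sketch of that loop, assuming the exported SQL files from above live under schema/ and migrations are kept in migrations/ (directory names and the dev database URL are illustrative):

```shell
# Generate a migration that moves the migration directory to the state
# described by the exported schema files.
atlas migrate diff initial \
  --dir "file://migrations" \
  --to "file://schema/main.sql" \
  --dev-url "docker://postgres/16/dev?search_path=public"

# Apply pending migrations to the target database.
atlas migrate apply --dir "file://migrations" -u "$DB_URL"
```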
Declarative Migrations
Manage your schema declaratively by defining the desired state as code, and let Atlas plan and apply the changes using atlas schema apply. To review or approve changes before applying them, use atlas schema plan to pre-plan and approve migrations in advance.
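A minimal sketch, again assuming the exported files live under schema/ and a placeholder dev database URL:

```shell
# Plan and apply changes so the database matches the exported schema files.
atlas schema apply -u "$DB_URL" \
  --to "file://schema/main.sql" \
  --dev-url "docker://postgres/16/dev?search_path=public"
```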