Whenever you want to store data in PythonOnWheels you create a model. Models are simply the definition of your data structure's attributes, types and constraints. Models are generated for you, so you can focus on your datatypes and constraints and leave the boilerplate out.
The principle is: generate, adapt to your needs, go!
For NoSQL it's just like that. For SQL we need to generate the corresponding DB schemas and migrate the DB. But this is also done by PythonOnWheels for you (using the great sqlalchemy / alembic libs). We also get validation alongside every model automatically, since PoW uses the great cerberus library for models behind the scenes.
You generate a model using the generate_model script. You give it a name and a DB type (sql, tinydb, mongodb, elastic):
python generate_model.py -n todo -t sql
This gives you a predefined model with some sample data so you can see the syntax and start instantly. Just adapt this stub to your needs. This is what a basic model looks like after it is generated:
@relation.setup_sql_schema()
class Todo(Base):
    schema = {
        'title' : { 'type' : 'string', 'maxlength' : 35 },
        'text'  : { 'type' : 'string' },
        "votes" : { "type" : "integer", "default" : 0 }
    }
For additional schema options for SQL models, read Additional Options for SQL schemas.
Each schema entry is simply: attribute name, type, constraints.
Not only is the syntax easy, but PythonOnWheels actually uses cerberus schemas for model definitions. This gives us the ability to use a proven, well-defined format, and since Cerberus is a validation library for Python, we get model validation for free.
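For illustration, here is roughly how Cerberus itself validates a document against such a schema. This is a minimal sketch using the cerberus library directly; PoW wires this up for you behind the scenes.

from cerberus import Validator

schema = {
    'title' : { 'type' : 'string', 'maxlength' : 35 },
    'text'  : { 'type' : 'string' },
    'votes' : { 'type' : 'integer', 'default' : 0 }
}

v = Validator(schema)
print(v.validate({'title': 'write docs', 'votes': 1}))   # True: matches the schema
print(v.validate({'title': 'x' * 50}))                   # False: maxlength of 35 exceeded
print(v.errors)                                          # tells you which constraint failed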
And there is one more thing that makes this very handy: the schemas for all DB types are the same. So it does not matter if you work with a NoSQL DB like MongoDB or a relational DB like PostgreSQL.
We will focus on SQL models in the next sections. I just want to show you here that SQL and NoSQL schemas are the same.
python generate_model.py -n todo -t mongodb
You can see that the schema definition syntax is absolutely the same. The only difference is that you can use more datatypes when you persist your data in a NoSQL DB. An example is the list type, which can be stored directly in NoSQL DBs.
class Todo(MongoModel):
    schema = {
        'title' : { 'type' : 'string', 'maxlength' : 35 },
        'text'  : { 'type' : 'string' },
        'tags'  : { 'type' : 'list', "default" : [] },
        "votes" : { "type" : "integer", "default" : 0 }
    }
This hands-on tutorial focuses on SQL. For NoSQL models you can find a quick intro and an in-depth documentation. You can also watch this short asciinema cast.
Now that we have defined our model's data attributes, types and constraints, we can move on to actually storing our data in a DB. For that we obviously need access to a database. PythonOnWheels supports many SQL DBs like PostgreSQL, MySQL, Oracle, Microsoft SQL Server and SQLite, and NoSQL DBs like MongoDB, TinyDB and Elasticsearch (alpha).
All DB connections you need are defined in the config.py file in your app directory. It contains a database section which looks like this:
database = {
    "sql" : {
        "loglevel" : logging.INFO,
        #
        # this is an example for sqlite
        #
        "type"    : "sqlite",    # or: "db+driver" e.g. => "postgres+psycopg2" ...
        "dbname"  : os.path.join(BASE_DIR, 'db.sqlite'),    # just a name for non file based DBs
        "host"    : None,
        "port"    : None,
        "user"    : None,
        "passwd"  : None,
        "enabled" : True         # switch currently unused
    },
    ....
You can see that for every DB you have to define the typical parameters like hostname, port, user, password and so on. Should be pretty straightforward. For SQLite and TinyDB (which are file based) you just have to point to the DB file. Another nice thing is that you can even use different DBs for different models in parallel in the same app.
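As a sketch, a PostgreSQL entry would follow the same structure as the SQLite example above. The host, port and credentials below are placeholders, not values from this tutorial:

database = {
    "sql" : {
        "loglevel" : logging.INFO,
        "type"     : "postgres+psycopg2",    # db+driver, as hinted in the comment above
        "dbname"   : "testapp",              # just a name for non file based DBs
        "host"     : "localhost",
        "port"     : 5432,
        "user"     : "testapp_user",
        "passwd"   : "secret",
        "enabled"  : True
    },
    # ... other DB sections (mongodb, tinydb, ...) follow here
}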
In this example we will use SQLite, which is preconfigured out of the box, so we don't need to change anything and can just go ahead. The preconfigured SQLite DB file lives in the main app directory:
db.sqlite
Init the SQL environment:
To initialize the SQL environment we have to run the init script.
python init_sqldb_environment.py
You have to execute this only once, and only if you use a SQL DB. If you go completely NoSQL (MongoDB for example) you don't need to initialize the SQL environment and don't need migrations.
We have now defined our todo model, set up the DB connection and are ready to migrate our DB.
Let's look at the schema definition once again:
@relation.setup_sql_schema()
class Todo(Base):
    schema = {
        'title' : { 'type' : 'string', 'maxlength' : 35 },
        'text'  : { 'type' : 'string' },
        "votes" : { "type" : "integer", "default" : 0 }
    }
For additional schema options for SQL models, read Additional Options for SQL schemas.
In software engineering, schema migration (also database migration, database change management) refers to the management of incremental, reversible changes to relational database schemas. A schema migration is performed on a database whenever it is necessary to update or revert that database's schema to some newer or older version.
So what we do with a migration is create the DB objects needed to store our todo model. In this case we simply add a table with the right column definitions, constraints, default values and so on. This is boilerplate work, so you guessed it:
In PythonOnWheels schema migrations are generated for you automatically.
To generate a migration for our new model we simply run the generate_migration script:
python generate_migration.py -n "added a todo model"
This creates a migration with the given name. The name is just a helper to make it easier to remember what the changes were, just like a commit message in git.
generating migration: added a todo model
['todo']
setup_schema:todo
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'todos'
Generating c:\khz\devel\testapp\migrations\versions\76245b5b9b28_added_a_todo_model.py ... done
rev: 76245b5b9b28
path: c:\khz\devel\testapp\migrations\versions\76245b5b9b28_added_a_todo_model.py
The migration script auto-detected the added table 'todos' and displayed the path to the actual migration file. As expected it's in the migrations/versions directory of your app.
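If you open that file you will find a standard Alembic migration with an upgrade() and a downgrade() function. The sketch below shows roughly what to expect; the column list is derived from our schema plus the automatic id, created_at and last_updated attributes, so the exact autogenerated content may differ:

# 76245b5b9b28_added_a_todo_model.py (sketch of an autogenerated Alembic migration)
from alembic import op
import sqlalchemy as sa

revision = '76245b5b9b28'    # the rev id printed above
down_revision = None         # previous revision id, None for the first migration

def upgrade():
    op.create_table(
        'todos',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('created_at', sa.DateTime(), nullable=True),
        sa.Column('last_updated', sa.DateTime(), nullable=True),
        sa.Column('title', sa.String(length=35), nullable=True),
        sa.Column('text', sa.String(), nullable=True),
        sa.Column('votes', sa.Integer(), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )

def downgrade():
    op.drop_table('todos')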
The last thing we need to do is to actually upgrade the DB and apply the changes of the migration to the database. Of course we do that using a script that automates the process. So the update is done by simply executing:
python update_db.py -d up
Now the database is ready to use. If we want to undo the changes of this migration we simply run the same script in the down direction:
python update_db.py -d down
Import the model and create an instance (make sure your app is on your PYTHONPATH):
>>> from testapp.models.sql.todo import Todo
setup_schema:todo
>>> t=Todo()
See what the model looks like (just type t in the interpreter):
>>> t
id : None (primary key)
created_at : None
last_updated : None
title :
text :
votes : 0
Add a todo.title and save the model to the DB:
>>> t.title="a first todo"
>>> t.upsert()
Let's print our model again:
>>> t
id : 1 (primary key)
created_at : 2019-01-04 15:55:48
last_updated : 2019-01-04 15:55:48
title : a first todo
text :
votes : 0
You can see that quite a few things changed here. We now have not only a title but also a created_at, a last_updated and an id. These attributes are created and tracked automatically by PythonOnWheels. You can also see that our votes attribute has the default value we specified in the schema:
"votes" : { "type" : "integer", "default" : 0 }
And you can see that the model persisted the data to the DB.
Let's update the model data and upsert again:
>>> t.text="something todo"
>>> t.votes=99
>>> t.upsert()
>>> t
id : 1 (primary key)
created_at : 2019-01-04 15:55:48
last_updated : 2019-01-04 16:21:01
title : a first todo
text : something todo
votes : 99
And you can see that the model was updated accordingly.
All PythonOnWheels models have methods to execute DB queries. The method names should be mostly self-explanatory.
Since we use sqlalchemy behind the scenes, the find method can take all the query criteria allowed by sqlalchemy queries. find_first and find_all just return the first row or all rows (as a cursor). find_from_statement can take a standard SQL SELECT statement to be executed raw.
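Based on those descriptions, a usage sketch could look like this. find() and find_all() are demonstrated below; the find_first() and find_from_statement() calls are assumptions about their signatures derived from the prose above:

t = Todo()
res   = t.find(Todo.title == "Todo #1")               # sqlalchemy-style filter criteria, returns a Query
rows  = t.find_all()                                   # all rows
first = t.find_first(Todo.title == "Todo #1")          # assumed: only the first matching row
raw   = t.find_from_statement("SELECT * FROM todos")   # assumed: a raw SQL SELECT

Let's create a few dummy todos to play with: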
>>> for elem in range(1,10):
...     t=Todo()
...     t.title="Todo #" + str(elem)
...     t.text="a dummy todo"
...     t.upsert()
OK, cool. Worked. You can see that there are 9 additional rows now.
>>> res=t.find(Todo.title=="Todo #1")
>>> res
<sqlalchemy.orm.query.Query object at 0x000002C24287BEB8>
>>> res[0]
id : 2 (primary key)
created_at : 2019-01-04 16:31:06
last_updated : 2019-01-04 16:31:06
title : Todo #1
text : a dummy todo
votes : 0
>>> res=t.find_all()
Again, res is a sqlalchemy.orm.query.Query instance. Let's iterate over it and print the title:
>>> for elem in res:
...     print(elem.title)
...
a first todo
Todo #1
Todo #2
Todo #3
Todo #4
Todo #5
Todo #6
Todo #7
Todo #8
Todo #9
Now we want to find all todos with titles starting with "Todo". Since we can use sqlalchemy query syntax we can just add a like filter to our query like this:
res=t.find(Todo.title.like("Todo%"))
If that isn't easy I don't know what is ;) Let's see if it works:
>>> res=t.find(Todo.title.like("Todo%"))
>>> for elem in res:
...     print(elem.title)
...
Todo #1
Todo #2
Todo #3
Todo #4
Todo #5
Todo #6
Todo #7
Todo #8
Todo #9
So these are the basics of querying for now. You can find more info on filter operators here. But for simple apps, most of the time you will stick to simple queries, usually looking up an id or a uuid you added to your model.
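For example, a typical lookup by primary key is just this (assuming you kept the generated integer id column):

res = t.find(Todo.id == 2)
print(res[0])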
Now we want to update the todo with title="Todo #1" and change its text and votes attributes.
First let's query for that row again:
res=t.find(Todo.title=="Todo #1")
Now we assign the result to t, just to give it a handier name. (We could actually have worked with res[0] directly.)
>>> t=res[0]
>>> t
id : 2 (primary key)
created_at : 2019-01-04 16:31:06
last_updated : 2019-01-04 16:31:06
title : Todo #1
text : a dummy todo
votes : 0
Now change the attributes:
>>> t.votes=11
>>> t.text="don't forget to delete all the dummy models"
>>> t.upsert()
The upsert() method automatically checks if the model is already in the DB and executes an SQL UPDATE (instead of an SQL INSERT).
>>> t
id : 2 (primary key)
created_at : 2019-01-04 16:31:06
last_updated : 2019-01-04 17:30:53
title : Todo #1
text : don't forget to delete all the dummy models
votes : 11
Let's double-check the DB using the DB viewer of your choice.
Nice. This worked as well. So let's really delete all the dummy models.
To finally delete the dummy models we will query all models with "dummy" in their todo.text and simply call the model's delete() method.
res=t.find(Todo.text.like("%dummy%"))
We already know how to iterate the result so we just delete the models (rows) directly.
>>> for elem in res:
...     elem.delete()
Let's look at the database to check.
Cool. Worked. All dummy models are deleted.
So that's it for SQL Models.
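Here is a quick recap of the commands and calls we used: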
python generate_model.py -n <modelname> -t <dbtype>
python generate_migration.py -n "added a todo model"
python update_db.py -d up
t.upsert()
res=t.find_all()
res=t.find(Todo.title.like("Todo%"))
>>> for elem in res:
...     elem.delete()
Hope you enjoy PythonOnWheels. If you have any questions or remarks, or find errors, you can open an issue on GitHub or tweet to @pythononwheels.