keystone.openstack.common.db.sqlalchemy package

Submodules

keystone.openstack.common.db.sqlalchemy.migration module

keystone.openstack.common.db.sqlalchemy.migration.db_sync(engine, abs_path, version=None, init_version=0, sanity_check=True)[source]

Upgrade or downgrade a database.

Runs the upgrade() or downgrade() functions found in the change scripts.

Parameters:
  • engine – SQLAlchemy engine instance for a given database
  • abs_path – Absolute path to migrate repository.
  • version – Version to upgrade/downgrade the database to. If None, the database is updated to the latest available version.
  • init_version – Initial database version
  • sanity_check – Require schema sanity checking for all tables
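The version walk that db_sync performs can be sketched in plain Python. The script table and version numbers below are hypothetical stand-ins for illustration, not the sqlalchemy-migrate API:

```python
def db_sync_sketch(scripts, current, target):
    """Walk migration scripts from `current` to `target`.

    `scripts` maps version -> (upgrade_fn, downgrade_fn); applying
    script N moves the schema from version N-1 to version N.
    """
    if target is None:
        target = max(scripts)  # default to the latest available version
    if target > current:
        # Upgrade: run upgrade() for each newer script, in order.
        for version in range(current + 1, target + 1):
            scripts[version][0]()
    else:
        # Downgrade: run downgrade() for each script, in reverse order.
        for version in range(current, target, -1):
            scripts[version][1]()
    return target

# Record which scripts ran, in order.
ran = []
scripts = {
    1: (lambda: ran.append('up-1'), lambda: ran.append('down-1')),
    2: (lambda: ran.append('up-2'), lambda: ran.append('down-2')),
}
db_sync_sketch(scripts, current=0, target=None)
```

As in the real function, passing target=None updates to the latest version, and a target below the current version triggers the downgrade path.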
keystone.openstack.common.db.sqlalchemy.migration.db_version(engine, abs_path, init_version)[source]

Show the current version of the repository.

Parameters:
  • engine – SQLAlchemy engine instance for a given database
  • abs_path – Absolute path to migrate repository
  • init_version – Initial database version
keystone.openstack.common.db.sqlalchemy.migration.db_version_control(engine, abs_path, version=None)[source]

Mark a database as under this repository’s version control.

Once a database is under version control, schema changes should only be done via change scripts in this repository.

Parameters:
  • engine – SQLAlchemy engine instance for a given database
  • abs_path – Absolute path to migrate repository
  • version – Initial database version
keystone.openstack.common.db.sqlalchemy.migration.patch_migrate()[source]

A workaround for SQLite’s inability to alter things.

SQLite’s abilities to alter tables are very limited (see http://www.sqlite.org/lang_altertable.html for details); e.g., one cannot drop a column or a constraint in SQLite. The workaround is to recreate the original table, omitting the corresponding constraint (or column).

The sqlalchemy-migrate library has a recreate_table() method that implements this workaround, but it does so incorrectly:

  • Information about a table’s unique constraints is not retrieved. So if you have a table with one unique constraint and a migration that adds another, you end up with a table that has only the latter unique constraint; the former is lost.
  • Dropping unique constraints is not supported at all.

The proper way to fix this would be a pull request to sqlalchemy-migrate, but the project appears to be dead, so for now we monkey-patch the library instead.
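The recreate-table workaround itself can be demonstrated with the standard library’s sqlite3 module; the table and column names here are illustrative:

```python
import sqlite3

# SQLite cannot drop a column or constraint via ALTER TABLE, so the
# workaround is: create a new table without the column, copy the data
# over, drop the old table, and rename the new one into place.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE foo (id INTEGER PRIMARY KEY, bar TEXT, obsolete TEXT)')
conn.execute("INSERT INTO foo VALUES (1, 'a', 'x')")

conn.execute('CREATE TABLE foo_new (id INTEGER PRIMARY KEY, bar TEXT)')
conn.execute('INSERT INTO foo_new (id, bar) SELECT id, bar FROM foo')
conn.execute('DROP TABLE foo')
conn.execute('ALTER TABLE foo_new RENAME TO foo')

rows = conn.execute('SELECT * FROM foo').fetchall()
```

After the rename, the data survives but the obsolete column is gone.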

keystone.openstack.common.db.sqlalchemy.models module

SQLAlchemy models.

class keystone.openstack.common.db.sqlalchemy.models.ModelBase[source]

Bases: six.Iterator

Base class for models.

get(key, default=None)[source]
iteritems()[source]

Make the model object behave like a dict.

Includes attributes from joins.

next()[source]
save(session)[source]

Save this object.

update(values)[source]

Make the model object behave like a dict.
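A minimal sketch of this dict-like behavior, using a plain Python stand-in rather than the real ModelBase implementation:

```python
class DictLikeModel(object):
    """Illustrative stand-in for ModelBase's dict-style access."""

    def get(self, key, default=None):
        # Fall back to `default` when the attribute is missing.
        return getattr(self, key, default)

    def __getitem__(self, key):
        # Allow obj['name'] as well as obj.name.
        return getattr(self, key)

    def update(self, values):
        # Copy each key/value pair onto the instance as an attribute.
        for key, value in values.items():
            setattr(self, key, value)

foo = DictLikeModel()
foo.update({'name': 'foo', 'size': 10})
```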

class keystone.openstack.common.db.sqlalchemy.models.SoftDeleteMixin[source]

Bases: object

deleted = Column(None, Integer(), table=None, default=ColumnDefault(0))
deleted_at = Column(None, DateTime(), table=None)
soft_delete(session)[source]

Mark this object as deleted.

class keystone.openstack.common.db.sqlalchemy.models.TimestampMixin[source]

Bases: object

created_at = Column(None, DateTime(), table=None, default=ColumnDefault(<function <lambda>>))
updated_at = Column(None, DateTime(), table=None, onupdate=ColumnDefault(<function <lambda>>))

keystone.openstack.common.db.sqlalchemy.provision module

Provision test environment for specific DB backends

keystone.openstack.common.db.sqlalchemy.provision.create_database(engine)[source]

Provide temporary user and database for each particular test.

keystone.openstack.common.db.sqlalchemy.provision.drop_database(admin_engine, current_uri)[source]

Drop temporary database and user after each particular test.

keystone.openstack.common.db.sqlalchemy.provision.get_engine(uri)[source]

Engine creation

Call the function without arguments to get an admin connection. An admin connection is required to create a temporary user and database for each particular test. Otherwise, use an existing connection to reconnect to the temporary database.

keystone.openstack.common.db.sqlalchemy.provision.main()[source]

Controller to handle commands.

  • create: Create a test user and database with random names.
  • drop: Drop the user and database created by the previous command.

keystone.openstack.common.db.sqlalchemy.session module

Session Handling for SQLAlchemy backend.

Recommended ways to use sessions within this framework:

  • Don’t use them explicitly; this is like running with AUTOCOMMIT=1. model_query() will implicitly use a session when called without one supplied. This is the ideal situation because it will allow queries to be automatically retried if the database connection is interrupted.

    Note

    Automatic retry will be enabled in a future patch.

    It is generally fine to issue several queries in a row like this. Even though they may be run in separate transactions and/or separate sessions, each one will see the data from the prior calls. If needed, undo- or rollback-like functionality should be handled at a logical level. For an example, look at the code around quotas and reservation_rollback().

    Examples:

    def get_foo(context, foo):
        return (model_query(context, models.Foo).
                filter_by(foo=foo).
                first())
    
    def update_foo(context, id, newfoo):
        (model_query(context, models.Foo).
                filter_by(id=id).
                update({'foo': newfoo}))
    
    def create_foo(context, values):
        foo_ref = models.Foo()
        foo_ref.update(values)
        foo_ref.save()
        return foo_ref
    
  • Within the scope of a single method, keep all the reads and writes within the context managed by a single session. In this way, the session’s __exit__ handler will take care of calling flush() and commit() for you. If using this approach, you should not explicitly call flush() or commit().

    Any error within the context of the session will cause the session to emit a ROLLBACK. Database errors like IntegrityError will be raised in the session’s __exit__ handler, so any try/except within the context managed by the session will not be triggered. Catching other, non-database errors inside the session will likewise not trigger the ROLLBACK, so exception handlers should always be placed outside the session, unless the developer wants to do a partial commit on purpose. If the connection is dropped before the commit is possible, the database will implicitly roll back the transaction.

    Note

    Statements in the session scope will not be automatically retried.

    If you create models within the session, they need to be added, but you do not need to call model.save():

    def create_many_foo(context, foos):
        session = sessionmaker()
        with session.begin():
            for foo in foos:
                foo_ref = models.Foo()
                foo_ref.update(foo)
                session.add(foo_ref)
    
    def update_bar(context, foo_id, newbar):
        session = sessionmaker()
        with session.begin():
            foo_ref = (model_query(context, models.Foo, session).
                        filter_by(id=foo_id).
                        first())
            (model_query(context, models.Bar, session).
                        filter_by(id=foo_ref['bar_id']).
                        update({'bar': newbar}))
    

    Note

    update_bar is a trivially simple example of using with session.begin, whereas create_many_foo is a good example of when a transaction is actually needed. In either case, it is always best to use as few queries as possible.

    The two queries in update_bar can be better expressed using a single query which avoids the need for an explicit transaction. It can be expressed like so:

    def update_bar(context, foo_id, newbar):
        subq = (model_query(context, models.Foo.bar_id).
                filter_by(id=foo_id).
                limit(1).
                subquery())
        (model_query(context, models.Bar).
                filter_by(id=subq.as_scalar()).
                update({'bar': newbar}))
    

    For reference, this emits approximately the following SQL statement:

    UPDATE bar SET bar = ${newbar}
        WHERE id=(SELECT bar_id FROM foo WHERE id = ${foo_id} LIMIT 1);
    

    Note

    create_duplicate_foo is a trivially simple example of catching an exception while using with session.begin. Here we create two instances with the same primary key; the exception must be caught outside of the context managed by the session:

    def create_duplicate_foo(context):
        foo1 = models.Foo()
        foo2 = models.Foo()
        foo1.id = foo2.id = 1
        session = sessionmaker()
        try:
            with session.begin():
                session.add(foo1)
                session.add(foo2)
        except exception.DBDuplicateEntry as e:
            handle_error(e)
    
  • Passing an active session between methods. Sessions should only be passed to private methods. The private method must use a subtransaction; otherwise SQLAlchemy will throw an error when you call session.begin() on an existing transaction. Public methods should not accept a session parameter and should not be involved in sessions within the caller’s scope.

    Note that this incurs more overhead in SQLAlchemy than the above means due to nesting transactions, and it is not possible to implicitly retry failed database operations when using this approach.

    This also makes code somewhat more difficult to read and debug, because a single database transaction spans more than one method. Error handling becomes less clear in this situation. When this is needed for code clarity, it should be clearly documented.

    def myfunc(foo):
        session = sessionmaker()
        with session.begin():
            # do some database things
            bar = _private_func(foo, session)
        return bar
    
    def _private_func(foo, session=None):
        if not session:
            session = sessionmaker()
        with session.begin(subtransactions=True):
            # do some other database things
            bar = None
        return bar
    

There are some things which it is best to avoid:

  • Don’t keep a transaction open any longer than necessary.

    This means that your with session.begin() block should be as short as possible, while still containing all the related calls for that transaction.

  • Avoid with_lockmode('UPDATE') when possible.

    In MySQL/InnoDB, when a SELECT ... FOR UPDATE query does not match any rows, it will take a gap-lock. This is a form of write-lock on the “gap” where no rows exist, and prevents any other writes to that space. This can effectively prevent any INSERT into a table by locking the gap at the end of the index. Similar problems will occur if the SELECT FOR UPDATE has an overly broad WHERE clause, or doesn’t properly use an index.

    One idea proposed at ODS Fall ‘12 was to use a normal SELECT to test the number of rows matching a query, and if only one row is returned, then issue the SELECT FOR UPDATE.

    The better long-term solution is to use INSERT .. ON DUPLICATE KEY UPDATE. However, this can not be done until the “deleted” columns are removed and proper UNIQUE constraints are added to the tables.

Enabling soft deletes:

  • To use/enable soft-deletes, the SoftDeleteMixin must be added to your model class. For example:

    class NovaBase(models.SoftDeleteMixin, models.ModelBase):
        pass
    

Efficient use of soft deletes:

  • There are two possible ways to mark a record as deleted: model.soft_delete() and query.soft_delete().

    The model.soft_delete() method works with a single already-fetched entry. query.soft_delete() makes only one db request for all entries that correspond to the query.

  • In almost all cases you should use query.soft_delete(). Some examples:

    def soft_delete_bar():
        count = model_query(BarModel).find(some_condition).soft_delete()
        if count == 0:
            raise Exception("0 entries were soft deleted")
    
    def complex_soft_delete_with_synchronization_bar(session=None):
        if session is None:
            session = sessionmaker()
        with session.begin(subtransactions=True):
            count = (model_query(BarModel).
                        find(some_condition).
                        soft_delete(synchronize_session=True))
                        # Here synchronize_session is required, because we
                        # don't know what is going on in outer session.
            if count == 0:
                raise Exception("0 entries were soft deleted")
    
  • There is only one situation where model.soft_delete() is appropriate: when you fetch a single record, work with it, and mark it as deleted in the same transaction.

    def soft_delete_bar_model():
        session = sessionmaker()
        with session.begin():
            bar_ref = model_query(BarModel).find(some_condition).first()
            # Work with bar_ref
            bar_ref.soft_delete(session=session)
    

    However, if you need to work with all entries that correspond to query and then soft delete them you should use the query.soft_delete() method:

    def soft_delete_multi_models():
        session = sessionmaker()
        with session.begin():
            query = (model_query(BarModel, session=session).
                        find(some_condition))
            model_refs = query.all()
            # Work with model_refs
            query.soft_delete(synchronize_session=False)
            # synchronize_session=False should be set if there is no outer
            # session and these entries are not used after this.
    

    When working with many rows, it is very important to use query.soft_delete, which issues a single query. Using model.soft_delete(), as in the following example, is very inefficient.

    for bar_ref in bar_refs:
        bar_ref.soft_delete(session=session)
    # This will produce count(bar_refs) db requests.
    
class keystone.openstack.common.db.sqlalchemy.session.EngineFacade(sql_connection, sqlite_fk=False, autocommit=True, expire_on_commit=False, **kwargs)[source]

Bases: object

A helper class for removing global engine instances from keystone.db.

As a library, keystone.db can’t decide where to store/when to create engine and sessionmaker instances, so this must be left for a target application.

On the other hand, in order to simplify the adoption of keystone.db changes, we’ll provide a helper class, which creates engine and sessionmaker on its instantiation and provides get_engine()/get_session() methods that are compatible with corresponding utility functions that currently exist in target projects, e.g. in Nova.

engine/sessionmaker instances will still be global (and they are meant to be global), but they will be stored in the app context, rather than in the keystone.db context.

Note: use of this helper is completely optional, and you are encouraged to integrate engine/sessionmaker instances into your apps any way you like (e.g. one might want to bind a session to a request context). Two important things to remember:

  1. An Engine instance is effectively a pool of DB connections, so it’s meant to be shared (and it’s thread-safe).
  2. A Session instance is not meant to be shared and represents a DB transactional context (i.e. it’s not thread-safe). sessionmaker is a factory of sessions.
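The shared-engine / per-use-session split that EngineFacade encodes can be sketched with plain Python stand-ins. FakeEngine and FakeSession below are placeholders for the SQLAlchemy classes, not the real implementation:

```python
class FakeEngine(object):
    """Stand-in for a SQLAlchemy Engine: one shared connection pool."""

class FakeSession(object):
    """Stand-in for a SQLAlchemy Session: one per unit of work."""
    def __init__(self, bind):
        self.bind = bind

class FacadeSketch(object):
    def __init__(self, url):
        # Created once and shared: an engine is thread-safe.
        self._engine = FakeEngine()
        # The sessionmaker role: a factory producing fresh sessions.
        self._maker = lambda: FakeSession(bind=self._engine)

    def get_engine(self):
        return self._engine   # always the same shared instance

    def get_session(self):
        return self._maker()  # a new, unshared session on every call

facade = FacadeSketch('sqlite://')
```

get_engine() hands back the single shared engine, while get_session() produces a distinct session each time, mirroring the two rules above.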
classmethod from_config(connection_string, conf, sqlite_fk=False, autocommit=True, expire_on_commit=False)[source]

Initialize EngineFacade using oslo.config config instance options.

Parameters:
  • connection_string (string) – SQLAlchemy connection string
  • conf (oslo.config.cfg.ConfigOpts) – oslo.config config instance
  • sqlite_fk (bool) – enable foreign keys in SQLite
  • autocommit (bool) – use autocommit mode for created Session instances
  • expire_on_commit (bool) – expire session objects on commit
get_engine()[source]

Get the engine instance (note that it is shared).

get_session(**kwargs)[source]

Get a Session instance.

If passed, keyword argument values override the ones used when the sessionmaker instance was created.

Parameters:
  • autocommit (bool) – use autocommit mode for created Session instances
  • expire_on_commit (bool) – expire session objects on commit
class keystone.openstack.common.db.sqlalchemy.session.Query(entities, session=None)[source]

Bases: sqlalchemy.orm.query.Query

Subclass of sqlalchemy.orm.query.Query with a soft_delete() method.

soft_delete(synchronize_session='evaluate')[source]
class keystone.openstack.common.db.sqlalchemy.session.Session(bind=None, autoflush=True, expire_on_commit=True, _enable_transaction_accounting=True, autocommit=False, twophase=False, weak_identity_map=True, binds=None, extension=None, info=None, query_cls=<class 'sqlalchemy.orm.query.Query'>)[source]

Bases: sqlalchemy.orm.session.Session

Custom Session class to avoid SQLAlchemy Session monkey patching.

execute(*args, **kwargs)[source]
flush(*args, **kwargs)[source]
query(*args, **kwargs)[source]
class keystone.openstack.common.db.sqlalchemy.session.SqliteForeignKeysListener[source]

Bases: sqlalchemy.interfaces.PoolListener

Ensures that the foreign key constraints are enforced in SQLite.

Foreign key constraints are disabled by default in SQLite, so they are enabled here for every database connection.

connect(dbapi_con, con_record)[source]
keystone.openstack.common.db.sqlalchemy.session.create_engine(sql_connection, sqlite_fk=False, mysql_sql_mode=None, idle_timeout=3600, connection_debug=0, max_pool_size=None, max_overflow=None, pool_timeout=None, sqlite_synchronous=True, connection_trace=False, max_retries=10, retry_interval=10)[source]

Return a new SQLAlchemy engine.

keystone.openstack.common.db.sqlalchemy.session.get_maker(engine, autocommit=True, expire_on_commit=False)[source]

Return a SQLAlchemy sessionmaker using the given engine.

keystone.openstack.common.db.sqlalchemy.test_base module

class keystone.openstack.common.db.sqlalchemy.test_base.DbFixture(test)[source]

Bases: fixtures.fixture.Fixture

Basic database fixture.

Allows tests to run against various db backends, such as SQLite, MySQL, and PostgreSQL. The SQLite backend is used by default. To override the default backend URI, set the OS_TEST_DBAPI_CONNECTION environment variable with database admin credentials for the specific backend.

setUp()[source]
class keystone.openstack.common.db.sqlalchemy.test_base.DbTestCase(*args, **kwargs)[source]

Bases: keystone.openstack.common.test.BaseTestCase

Base class for testing of DB code.

Uses DbFixture. Intended to be the main database test case, running all the tests on a given backend with a user-defined URI. Backend-specific tests should be decorated with the backend_specific decorator.

FIXTURE

alias of DbFixture

setUp()[source]
class keystone.openstack.common.db.sqlalchemy.test_base.MySQLOpportunisticFixture(test)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticFixture

DRIVER = 'mysql'
class keystone.openstack.common.db.sqlalchemy.test_base.MySQLOpportunisticTestCase(*args, **kwargs)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticTestCase

FIXTURE

alias of MySQLOpportunisticFixture

class keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticFixture(test)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.DbFixture

Base fixture to use default CI databases.

The databases exist in the OpenStack CI infrastructure, but for correct functioning in a local environment they must be created manually.

DBNAME = 'openstack_citest'
DRIVER
PASSWORD = 'openstack_citest'
USERNAME = 'openstack_citest'
class keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticTestCase(*args, **kwargs)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.DbTestCase

Base test case to use default CI databases.

The subclasses of this test case run only when the openstack_citest database is available; otherwise the tests will be skipped.

FIXTURE
setUp()[source]
class keystone.openstack.common.db.sqlalchemy.test_base.PostgreSQLOpportunisticFixture(test)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticFixture

DRIVER = 'postgresql'
class keystone.openstack.common.db.sqlalchemy.test_base.PostgreSQLOpportunisticTestCase(*args, **kwargs)[source]

Bases: keystone.openstack.common.db.sqlalchemy.test_base.OpportunisticTestCase

FIXTURE

alias of PostgreSQLOpportunisticFixture

keystone.openstack.common.db.sqlalchemy.test_base.backend_specific(*dialects)[source]

Decorator to skip backend-specific tests on inappropriate engines.

dialects: list of dialect names under which the test will be launched.
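The skip behavior can be sketched as a decorator that compares a dialect attribute on the test case against the allowed list. The attribute name and the skip mechanism here are illustrative stand-ins, not the actual implementation:

```python
import functools

def backend_specific_sketch(*dialects):
    """Skip a test method unless self.dialect is in `dialects`."""
    def wrap(fn):
        @functools.wraps(fn)
        def ins_wrap(self, *args, **kwargs):
            if self.dialect not in dialects:
                # A real test framework would call self.skipTest() here.
                return 'skipped: %s not in %s' % (self.dialect, dialects)
            return fn(self, *args, **kwargs)
        return ins_wrap
    return wrap

class FakeTestCase(object):
    dialect = 'sqlite'   # stand-in for the engine's dialect name

    @backend_specific_sketch('mysql', 'postgresql')
    def test_mysql_only(self):
        return 'ran'

    @backend_specific_sketch('sqlite')
    def test_sqlite(self):
        return 'ran'
```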

keystone.openstack.common.db.sqlalchemy.test_migrations module

class keystone.openstack.common.db.sqlalchemy.test_migrations.BaseMigrationTestCase(*args, **kwargs)[source]

Bases: keystone.openstack.common.test.BaseTestCase

Base class for testing migration utils.

execute_cmd(cmd=None)[source]
setUp()[source]
tearDown()[source]
class keystone.openstack.common.db.sqlalchemy.test_migrations.WalkVersionsMixin[source]

Bases: object

keystone.openstack.common.db.sqlalchemy.utils module

exception keystone.openstack.common.db.sqlalchemy.utils.ColumnError[source]

Bases: exceptions.Exception

Error raised when no column or an invalid column is found.

class keystone.openstack.common.db.sqlalchemy.utils.InsertFromSelect(table, select)[source]

Bases: sqlalchemy.sql.dml.UpdateBase

Form the base for an INSERT INTO table (SELECT ...) statement.

exception keystone.openstack.common.db.sqlalchemy.utils.InvalidSortKey[source]

Bases: exceptions.Exception

message = u'Sort key supplied was not valid.'
keystone.openstack.common.db.sqlalchemy.utils.change_deleted_column_type_to_boolean(migrate_engine, table_name, **col_name_col_instance)[source]
keystone.openstack.common.db.sqlalchemy.utils.change_deleted_column_type_to_id_type(migrate_engine, table_name, **col_name_col_instance)[source]
keystone.openstack.common.db.sqlalchemy.utils.drop_old_duplicate_entries_from_table(migrate_engine, table_name, use_soft_delete, *uc_column_names)[source]

Drop all old rows having the same values for the columns in uc_column_names.

This method drops (or marks as deleted, if use_soft_delete is True) old duplicate rows from the table named table_name.

Parameters:
  • migrate_engine – Sqlalchemy engine
  • table_name – Table with duplicates
  • use_soft_delete – If True - values will be marked as deleted, if False - values will be removed from table
  • uc_column_names – Unique constraint columns
keystone.openstack.common.db.sqlalchemy.utils.drop_unique_constraint(migrate_engine, table_name, uc_name, *columns, **col_name_col_instance)[source]

Drop unique constraint from table.

DEPRECATED: this function is deprecated and will be removed from keystone.db in a few releases. Please use the UniqueConstraint.drop() method directly in sqlalchemy-migrate migration scripts.

This method drops a unique constraint from a table and works for MySQL, PostgreSQL, and SQLite. In MySQL and PostgreSQL we are able to use the ALTER TABLE construct. SQLAlchemy doesn’t support some SQLite column types and replaces their type with NullType in metadata; we process these columns and replace NullType with the correct column type.

Parameters:
  • migrate_engine – SQLAlchemy engine
  • table_name – name of the table that contains the unique constraint
  • uc_name – name of the unique constraint to be dropped
  • columns – columns that are in the unique constraint
  • col_name_col_instance – contains column_name=column_instance pairs. column_instance is an instance of Column. These params are required only for columns whose types are unsupported by SQLite, for example BigInteger.
keystone.openstack.common.db.sqlalchemy.utils.get_connect_string(backend, database, user=None, passwd=None)[source]

Get a database connection.

Try to get a connection with a very specific set of values; if we get these, we’ll run the tests, otherwise they are skipped.

keystone.openstack.common.db.sqlalchemy.utils.get_db_connection_info(conn_pieces)[source]
keystone.openstack.common.db.sqlalchemy.utils.get_table(engine, name)[source]

Returns a SQLAlchemy table dynamically from the db.

Needed because the models don’t work for us in migrations, as the models will be far out of sync with the current data.

keystone.openstack.common.db.sqlalchemy.utils.is_backend_avail(backend, database, user=None, passwd=None)[source]
keystone.openstack.common.db.sqlalchemy.utils.model_query(context, model, session, args=None, project_only=False, read_deleted=None)[source]

Query helper that accounts for context’s read_deleted field.

Parameters:
  • context – context to query under
  • model (models.ModelBase) – Model to query. Must be a subclass of ModelBase.
  • session (sqlalchemy.orm.session.Session) – The session to use.
  • args (tuple) – Arguments to query. If None - model is used.
  • project_only (bool) – If present and context is user-type, then restrict query to match the context’s project_id. If set to ‘allow_none’, restriction includes project_id = None.
  • read_deleted (bool) – If present, overrides context’s read_deleted field.

Usage:

    result = (utils.model_query(context, models.Instance, session=session).
              filter_by(uuid=instance_uuid).
              all())

    query = (utils.model_query(context, Node, session=session,
                               args=(func.count(Node.id), func.sum(Node.ram))).
             filter_by(project_id=project_id))
keystone.openstack.common.db.sqlalchemy.utils.or_(*clauses)

Produce a disjunction of expressions joined by OR.

E.g.:

from sqlalchemy import or_

stmt = select([users_table]).where(
                or_(
                    users_table.c.name == 'wendy',
                    users_table.c.name == 'jack'
                )
            )

The or_() conjunction is also available using the Python | operator (though note that compound expressions need to be parenthesized in order to function with Python operator precedence behavior):

stmt = select([users_table]).where(
                (users_table.c.name == 'wendy') |
                (users_table.c.name == 'jack')
            )

See also

and_()

keystone.openstack.common.db.sqlalchemy.utils.paginate_query(query, model, limit, sort_keys, marker=None, sort_dir=None, sort_dirs=None)[source]

Returns a query with sorting / pagination criteria added.

Pagination works by requiring a unique sort_key, specified by sort_keys. (If sort_keys is not unique, then we risk looping through values.) We use the last row in the previous page as the ‘marker’ for pagination, so we must return values that follow the passed marker in the order. With a single-valued sort_key, this is easy: sort_key > X. With a compound-valued sort_key (k1, k2, k3), we must repeat the lexicographical ordering: (k1 > X1) OR (k1 == X1 AND k2 > X2) OR (k1 == X1 AND k2 == X2 AND k3 > X3).

We also have to cope with different sort_directions.

Typically, the id of the last row is used as the client-facing pagination marker, then the actual marker object must be fetched from the db and passed in to us as marker.

Parameters:
  • query – the query object to which we should add paging/sorting
  • model – the ORM model class
  • limit – maximum number of items to return
  • sort_keys – array of attributes by which results should be sorted
  • marker – the last item of the previous page; we return results after this value.
  • sort_dir – direction in which results should be sorted (asc, desc)
  • sort_dirs – per-column array of sort_dirs, corresponding to sort_keys
Return type:

sqlalchemy.orm.query.Query

Returns:

The query with sorting/pagination added.
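The compound marker comparison described above can be sketched in pure Python over sort-key tuples. This reproduces the lexicographical condition, not the actual SQL criteria the function builds:

```python
def after_marker(row, marker, sort_dirs):
    """True if `row` comes after `marker` under the compound ordering.

    Implements (k1 > X1) OR (k1 == X1 AND k2 > X2) OR ..., with each
    comparison flipped for 'desc' columns.
    """
    for i, (value, mark) in enumerate(zip(row, marker)):
        greater = value > mark if sort_dirs[i] == 'asc' else value < mark
        if greater:
            return True
        if value != mark:
            return False   # ordered before the marker on this key
    return False           # equal to the marker on every key

# Rows sorted ascending on both keys; the marker is the last row of the
# previous page, so only rows strictly after it belong to the next page.
rows = [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
marker = (1, 'b')
page = [r for r in rows if after_marker(r, marker, ['asc', 'asc'])]
```

The same predicate handles mixed sort directions by flipping the per-column comparison, which is why sort_dirs is supplied per key.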

keystone.openstack.common.db.sqlalchemy.utils.sanitize_db_url(url)[source]
keystone.openstack.common.db.sqlalchemy.utils.visit_insert_from_select(element, compiler, **kw)[source]

Form the INSERT INTO table (SELECT ... ) statement.

Module contents