psycopg2.extras – Miscellaneous goodies for Psycopg 2¶
This module is a generic place used to hold little helper functions and classes until a better place in the distribution is found.
Connection and cursor subclasses¶
A few objects that change the way the results are returned by the cursor or
modify the object behavior in some other way. Typically cursor subclasses
are passed as cursor_factory argument to connect() so that the
connection’s cursor() method will generate objects of this
class. Alternatively a cursor subclass can be used one-off by passing it
as the cursor_factory argument to the cursor() method.
If you want to use a connection subclass you can pass it as the
connection_factory argument of the connect() function.
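As a quick sketch (the DSN below is a placeholder), the three options look like this:
import psycopg2
import psycopg2.extras

# Every cursor created by this connection will be a DictCursor.
conn = psycopg2.connect("dbname=test",  # placeholder DSN
                        cursor_factory=psycopg2.extras.DictCursor)

# Or request the subclass for a single cursor only.
plain_conn = psycopg2.connect("dbname=test")
dict_cur = plain_conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

# A connection subclass is selected with connection_factory instead.
dict_conn = psycopg2.connect("dbname=test",
                             connection_factory=psycopg2.extras.DictConnection)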
Dictionary-like cursor¶
The dict cursors allow access to the retrieved records using an interface similar to Python dictionaries instead of tuples.
>>> dict_cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
>>> dict_cur.execute("INSERT INTO test (num, data) VALUES(%s, %s)",
... (100, "abc'def"))
>>> dict_cur.execute("SELECT * FROM test")
>>> rec = dict_cur.fetchone()
>>> rec['id']
1
>>> rec['num']
100
>>> rec['data']
"abc'def"
The records still support indexing as the original tuple:
>>> rec[2]
"abc'def"
class psycopg2.extras.DictCursor(*args, **kwargs)¶
A cursor that keeps a list of column name -> index mappings.
class psycopg2.extras.DictConnection¶
A connection that uses DictCursor automatically.

Note
Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=DictCursor) instead of DictConnection.
class psycopg2.extras.DictRow(cursor)¶
A row object that allows by-column-name access to data.
Real dictionary cursor¶
class psycopg2.extras.RealDictCursor(*args, **kwargs)¶
A cursor that uses a real dict as the base type for rows.

Note that this cursor is extremely specialized and does not allow the normal access (using integer indices) to fetched data. If you need to access database rows both as a dictionary and a list, then use the generic DictCursor instead of RealDictCursor.
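For comparison with the DictCursor example above, a minimal sketch reusing the same test table; the returned rows are plain dictionaries, so integer indexing such as rec[0] would raise KeyError:
>>> rd_cur = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
>>> rd_cur.execute("SELECT * FROM test")
>>> rec = rd_cur.fetchone()
>>> rec['num'], rec['data']
(100, "abc'def")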
class psycopg2.extras.RealDictConnection¶
A connection that uses RealDictCursor automatically.

Note
Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=RealDictCursor) instead of RealDictConnection.
class psycopg2.extras.RealDictRow(cursor)¶
A dict subclass representing a data record.
namedtuple cursor¶
New in version 2.3.
These objects require collections.namedtuple() to be available, so they work out-of-the-box only from Python 2.6. However, the namedtuple implementation is compatible with previous Python versions, so all you have to do is download it and make it available where Psycopg expects to find it...
from somewhere import namedtuple
import collections
collections.namedtuple = namedtuple
from psycopg2.extras import NamedTupleConnection
# ...
class psycopg2.extras.NamedTupleCursor¶
A cursor that generates results as namedtuple.

fetch*() methods will return named tuples instead of regular tuples, so their elements can be accessed both as regular numeric items as well as attributes.

>>> nt_cur = conn.cursor(cursor_factory=psycopg2.extras.NamedTupleCursor)
>>> nt_cur.execute("SELECT * FROM test")
>>> rec = nt_cur.fetchone()
>>> rec
Record(id=1, num=100, data="abc'def")
>>> rec[1]
100
>>> rec.data
"abc'def"
class psycopg2.extras.NamedTupleConnection¶
A connection that uses NamedTupleCursor automatically.

Note
Not very useful since Psycopg 2.5: you can use psycopg2.connect(dsn, cursor_factory=NamedTupleCursor) instead of NamedTupleConnection.
Logging cursor¶
class psycopg2.extras.LoggingConnection¶
A connection that logs all queries to a file or logger object.

filter(msg, curs)¶
Filter the query before logging it.

This is the method to overwrite to filter unwanted queries out of the log or to add some extra data to the output. The default implementation just does nothing.

initialize(logobj)¶
Initialize the connection to log to logobj.

The logobj parameter can be an open file object or a Logger instance from the standard logging module.
class psycopg2.extras.LoggingCursor¶
A cursor that logs queries using its connection logging facilities.
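A minimal usage sketch (the DSN is a placeholder): the connection must be initialized with the log target before any cursor is used:
import logging
import psycopg2
from psycopg2.extras import LoggingConnection

logging.basicConfig(level=logging.DEBUG)

conn = psycopg2.connect("dbname=test",  # placeholder DSN
                        connection_factory=LoggingConnection)
conn.initialize(logging.getLogger("sql"))  # an open file object works too

cur = conn.cursor()        # a LoggingCursor
cur.execute("SELECT 1")    # the query text is sent to the logger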
class psycopg2.extras.MinTimeLoggingConnection¶
A connection that logs queries based on execution time.

This is just an example of how to sub-class LoggingConnection to provide some extra filtering for the logged queries. Both the initialize() and filter() methods are overwritten to make sure that only queries executing for more than mintime ms are logged.

Note that this connection uses the specialized cursor MinTimeLoggingCursor.
class psycopg2.extras.MinTimeLoggingCursor¶
The cursor sub-class companion to MinTimeLoggingConnection.
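A sketch of how the pair can be used; the mintime keyword argument of initialize() (a threshold in milliseconds) is an assumption inferred from the description above and may vary between versions:
import logging
import psycopg2
from psycopg2.extras import MinTimeLoggingConnection

logging.basicConfig(level=logging.DEBUG)

conn = psycopg2.connect("dbname=test",  # placeholder DSN
                        connection_factory=MinTimeLoggingConnection)
conn.initialize(logging.getLogger("slow_sql"), mintime=500)  # assumed keyword

cur = conn.cursor()
cur.execute("SELECT pg_sleep(1)")  # takes more than 500 ms: logged
cur.execute("SELECT 1")            # fast query: filtered out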
Additional data types¶
JSON adaptation¶
New in version 2.5.
Changed in version 2.5.4: added jsonb support. In previous versions jsonb values are returned
as strings. See the FAQ for a workaround.
Psycopg can adapt Python objects to and from the PostgreSQL json and
jsonb types. With PostgreSQL 9.2 and following versions adaptation is
available out-of-the-box. To use JSON data with previous database versions
(either with the 9.1 json extension, or even just to convert text
fields to JSON) you can use the register_json() function.
The Python library used by default to convert Python objects to JSON and to
parse data from the database depends on the language version: with Python 2.6
and following the json module from the standard library is used;
with previous versions the simplejson module is used if available. Note
that the last simplejson version supporting Python 2.4 is the 2.0.9.
In order to pass a Python object to the database as query argument you can use
the Json adapter:
curs.execute("insert into mytable (jsondata) values (%s)",
[Json({'a': 100})])
Reading from the database, json and jsonb values will be automatically
converted to Python objects.
Note
If you are using the PostgreSQL json data type but you want to read
it as a string in Python instead of having it parsed, you can either cast
the column to text in the query (it is an efficient operation that
doesn’t involve a copy):
cur.execute("select jsondata::text from mytable")
or you can register a no-op loads() function with
register_default_json():
psycopg2.extras.register_default_json(loads=lambda x: x)
Note
You can use register_adapter() to adapt any Python
dictionary to JSON, either registering Json or any subclass or factory
creating a compatible adapter:
psycopg2.extensions.register_adapter(dict, psycopg2.extras.Json)
This setting is global though, so it is not compatible with similar
adapters such as the one registered by register_hstore(). Any other
object supported by JSON can be registered the same way, but this will
clobber the default adaptation rule, so be careful about unwanted side
effects.
If you want to customize the adaptation from Python to PostgreSQL you can
either provide a custom dumps() function to Json:
curs.execute("insert into mytable (jsondata) values (%s)",
[Json({'a': 100}, dumps=simplejson.dumps)])
or you can subclass it overriding the dumps() method:
class MyJson(Json):
    def dumps(self, obj):
        return simplejson.dumps(obj)
curs.execute("insert into mytable (jsondata) values (%s)",
[MyJson({'a': 100})])
Customizing the conversion from PostgreSQL to Python can be done passing a
custom loads() function to register_json(). For the builtin data types
(json from PostgreSQL 9.2, jsonb from PostgreSQL 9.4) use
register_default_json() and register_default_jsonb(). For example, if you
want to convert the float values from json into
Decimal you can use:
loads = lambda x: json.loads(x, parse_float=Decimal)
psycopg2.extras.register_json(conn, loads=loads)
class psycopg2.extras.Json(adapted, dumps=None)¶
An ISQLQuote wrapper to adapt a Python object to json data type.

Json can be used to wrap any object supported by the provided dumps function. If none is provided, the standard json.dumps() is used (simplejson for Python < 2.6; getquoted() will raise ImportError if the module is not available).
dumps(obj)¶
Serialize obj in JSON format.

The default is to call json.dumps() or the dumps function provided in the constructor. You can override this method to create a customized JSON wrapper.
psycopg2.extras.register_json(conn_or_curs=None, globally=False, loads=None, oid=None, array_oid=None, name='json')¶
Create and register typecasters converting json type to Python objects.

Parameters:
- conn_or_curs – a connection or cursor used to find the json and json[] oids; the typecasters are registered in a scope limited to this object, unless globally is set to True. It can be None if the oids are provided
- globally – if False register the typecasters only on conn_or_curs, otherwise register them globally
- loads – the function used to parse the data into a Python object. If None use json.loads(), where json is the module chosen according to the Python version (see above)
- oid – the OID of the json type if known; if not, it will be queried on conn_or_curs
- array_oid – the OID of the json[] array type if known; if not, it will be queried on conn_or_curs
- name – the name of the data type to look for in conn_or_curs

The connection or cursor passed to the function will be used to query the database and look for the OID of the json type (or an alternative type if name is provided). No query is performed if oid and array_oid are provided. Raise ProgrammingError if the type is not found.

Changed in version 2.5.4: added the name parameter to enable jsonb support.
psycopg2.extras.register_default_json(conn_or_curs=None, globally=False, loads=None)¶
Create and register json typecasters for PostgreSQL 9.2 and following.

Since PostgreSQL 9.2, json is a builtin type, so its oid is known and fixed. This function allows specifying a customized loads function for the default json type without querying the database. All the parameters have the same meaning as in register_json().
psycopg2.extras.register_default_jsonb(conn_or_curs=None, globally=False, loads=None)¶
Create and register jsonb typecasters for PostgreSQL 9.4 and following.

As in register_default_json(), the function allows registering a customized loads function for the jsonb type at its known oid for PostgreSQL 9.4 and following versions. All the parameters have the same meaning as in register_json().

New in version 2.5.4.
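For instance, a minimal sketch (the DSN is a placeholder) to have jsonb floats parsed into Decimal on a single connection:
# Register a custom loads function for jsonb on one connection only.
import json
from decimal import Decimal

import psycopg2
import psycopg2.extras

conn = psycopg2.connect("dbname=test")   # placeholder DSN
psycopg2.extras.register_default_jsonb(
    conn, loads=lambda x: json.loads(x, parse_float=Decimal))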
Hstore data type¶
New in version 2.3.
The hstore data type is a key-value store embedded in PostgreSQL. It has
been available for several server versions but with the release 9.0 it has
been greatly improved in capacity and usefulness with the addition of many
functions. It supports GiST or GIN indexes allowing search by keys or
key/value pairs as well as regular BTree indexes for equality, uniqueness etc.
Psycopg can convert Python dict objects to and from hstore structures.
Only dictionaries with string/unicode keys and values are supported. None
is also allowed as value but not as a key. Psycopg uses a more efficient hstore
representation when dealing with PostgreSQL 9.0 but previous server versions
are supported as well. By default the adapter/typecaster are disabled: they
can be enabled using the register_hstore() function.
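For example, a minimal sketch assuming the hstore extension is installed in the target database and cur is an open cursor:
>>> psycopg2.extras.register_hstore(cur)
>>> cur.execute("SELECT 'a=>1, b=>NULL'::hstore")
>>> rec = cur.fetchone()[0]
>>> rec['a'], rec['b']
('1', None)
>>> cur.execute("SELECT %s", ({'c': '2'},))
>>> cur.fetchone()[0]
{'c': '2'}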
psycopg2.extras.register_hstore(conn_or_curs, globally=False, str=False, oid=None, array_oid=None)¶
Register adapter and typecaster for dict-hstore conversions.

Parameters:
- conn_or_curs – a connection or cursor: the typecaster will be registered only on this object unless globally is set to True
- globally – register the adapter globally, not only on conn_or_curs
- unicode – if True, keys and values returned from the database will be unicode instead of str. The option is not available on Python 3
- oid – the OID of the hstore type if known. If not, it will be queried on conn_or_curs.
- array_oid – the OID of the hstore array type if known. If not, it will be queried on conn_or_curs.

The connection or cursor passed to the function will be used to query the database and look for the OID of the hstore type (which may be different across databases). If querying is not desirable (e.g. with asynchronous connections) you may specify it in the oid parameter, which can be found using a query such as SELECT 'hstore'::regtype::oid. Analogously you can obtain a value for array_oid using a query such as SELECT 'hstore[]'::regtype::oid.

Note that, when passing a dictionary from Python to the database, both strings and unicode keys and values are supported. Dictionaries returned from the database have keys/values according to the unicode parameter.

The hstore contrib module must be already installed in the database (executing the hstore.sql script in your contrib directory). Raise ProgrammingError if the type is not found.

Changed in version 2.4: added the oid parameter. If not specified, the typecaster is installed also if hstore is not installed in the public schema.

Changed in version 2.4.3: added support for hstore array.
Composite types casting¶
New in version 2.4.
Using register_composite() it is possible to cast a PostgreSQL composite
type (either created with the CREATE TYPE command or implicitly defined
after a table row type) into a Python named tuple, or into a regular tuple if
collections.namedtuple() is not found.
>>> cur.execute("CREATE TYPE card AS (value int, suit text);")
>>> psycopg2.extras.register_composite('card', cur)
<psycopg2.extras.CompositeCaster object at 0x...>
>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
card(value=8, suit='hearts')
Nested composite types are handled as expected, provided that the types of the composite components are registered as well.
>>> cur.execute("CREATE TYPE card_back AS (face card, back text);")
>>> psycopg2.extras.register_composite('card_back', cur)
<psycopg2.extras.CompositeCaster object at 0x...>
>>> cur.execute("select ((8, 'hearts'), 'blue')::card_back")
>>> cur.fetchone()[0]
card_back(face=card(value=8, suit='hearts'), back='blue')
Adaptation from Python tuples to composite types is automatic instead and requires no adapter registration.
Note
If you want to convert PostgreSQL composite types into something different
than a namedtuple you can subclass the CompositeCaster overriding
make(). For example, if you want to convert your type
into a Python dictionary you can use:
>>> class DictComposite(psycopg2.extras.CompositeCaster):
... def make(self, values):
... return dict(zip(self.attnames, values))
>>> psycopg2.extras.register_composite('card', cur,
... factory=DictComposite)
>>> cur.execute("select (8, 'hearts')::card")
>>> cur.fetchone()[0]
{'suit': 'hearts', 'value': 8}
psycopg2.extras.register_composite(name, conn_or_curs, globally=False, factory=None)¶
Register a typecaster to convert a composite type into a tuple.

Parameters:
- name – the name of a PostgreSQL composite type, e.g. created using the CREATE TYPE command
- conn_or_curs – a connection or cursor used to find the type oid and components; the typecaster is registered in a scope limited to this object, unless globally is set to True
- globally – if False (default) register the typecaster only on conn_or_curs, otherwise register it globally
- factory – if specified it should be a CompositeCaster subclass: use it to customize how to cast composite types

Returns: the registered CompositeCaster or factory instance responsible for the conversion

Changed in version 2.4.3: added support for array of composite types

Changed in version 2.5: added the factory parameter
class psycopg2.extras.CompositeCaster(name, oid, attrs, array_oid=None, schema=None)¶
Helps conversion of a PostgreSQL composite type into a Python object.

The class is usually created by the register_composite() function. You may want to create and register manually instances of the class if querying the database at registration time is not desirable (such as when using an asynchronous connection).
make(values)¶
Return a new Python object representing the data being casted.

values is the list of attributes, already casted into their Python representation.

You can override this method to customize the composite cast.
New in version 2.5.
Object attributes:
name¶
The name of the PostgreSQL type.

schema¶
The schema where the type is defined.

New in version 2.5.

oid¶
The oid of the PostgreSQL type.

array_oid¶
The oid of the PostgreSQL array type, if available.

type¶
The type of the Python objects returned. If collections.namedtuple() is available, it is a named tuple with attributes equal to the type components. Otherwise it is just the tuple object.

attnames¶
List of component names of the type to be casted.

atttypes¶
List of component type oids of the type to be casted.
Range data types¶
New in version 2.5.
Psycopg offers a Range Python type and supports adaptation between them and
PostgreSQL range types. Builtin range types are supported out-of-the-box;
user-defined range types can be adapted using register_range().
class psycopg2.extras.Range(lower=None, upper=None, bounds='[)', empty=False)¶
Python representation for a PostgreSQL range type.

Parameters:
- lower – lower bound for the range. None means unbound
- upper – upper bound for the range. None means unbound
- bounds – one of the literal strings (), [), (], [], representing whether the lower or upper bounds are included
- empty – if True, the range is empty

This Python type is only used to pass and retrieve range values to and from PostgreSQL and doesn’t attempt to replicate the PostgreSQL range features: it doesn’t perform normalization and doesn’t implement all the operators supported by the database.

Range objects are immutable, hashable, and support the in operator (checking if an element is within the range). They can be tested for equivalence. Empty ranges evaluate to False in boolean context, nonempty evaluate to True.

Changed in version 2.5.3: Range objects can be sorted although, as on the server side, this ordering is not particularly meaningful. It is only meant to be used by programs assuming objects using Range as primary key can be sorted on them. In previous versions comparing Ranges raises TypeError.

Although it is possible to instantiate Range objects, the class doesn’t have an adapter registered, so you cannot normally pass these instances as query arguments. To use range objects as query arguments you can either use one of the provided subclasses, such as NumericRange, or create a custom subclass using register_range().

Object attributes:
isempty¶
True if the range is empty.

lower¶
The lower bound of the range. None if empty or unbound.

upper¶
The upper bound of the range. None if empty or unbound.

lower_inc¶
True if the lower bound is included in the range.

upper_inc¶
True if the upper bound is included in the range.

lower_inf¶
True if the range doesn’t have a lower bound.

upper_inf¶
True if the range doesn’t have an upper bound.
The following Range subclasses map builtin PostgreSQL range types to
Python objects: they have an adapter registered so their instances can be
passed as query arguments. PostgreSQL range values read from database queries
are automatically casted into instances of these classes.
class psycopg2.extras.NumericRange(lower=None, upper=None, bounds='[)', empty=False)¶
A Range suitable to pass Python numeric types to a PostgreSQL range.

PostgreSQL types int4range, int8range, numrange are casted into NumericRange instances.
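For example (a sketch assuming cur is an open cursor):
>>> r = psycopg2.extras.NumericRange(10, 20)   # bounds default to '[)'
>>> 15 in r
True
>>> cur.execute("SELECT %s::int8range", (r,))
>>> cur.fetchone()[0]
NumericRange(10, 20, '[)')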
class psycopg2.extras.DateRange(lower=None, upper=None, bounds='[)', empty=False)¶
Represents daterange values.
class psycopg2.extras.DateTimeRange(lower=None, upper=None, bounds='[)', empty=False)¶
Represents tsrange values.
class psycopg2.extras.DateTimeTZRange(lower=None, upper=None, bounds='[)', empty=False)¶
Represents tstzrange values.
Note
Python lacks a representation for infinity dates, so Psycopg converts
such values to date.max and the like. When written into the database these
dates will assume their literal value (e.g. 9999-12-31 instead of
infinity). Check Infinite dates handling for an example of
an alternative adapter to map date.max to infinity. An
alternative dates adapter will be used automatically by the DateRange
adapter and so on.
Custom range types (created with CREATE TYPE ... AS RANGE) can be
adapted to a custom Range subclass:
psycopg2.extras.register_range(pgrange, pyrange, conn_or_curs, globally=False)¶
Create and register an adapter and the typecasters to convert between a PostgreSQL range type and a Python Range subclass.

Parameters:
- pgrange – the name of the PostgreSQL range type. Can be schema-qualified
- pyrange – a Range strict subclass, or just a name to give to a new class
- conn_or_curs – a connection or cursor used to find the oid of the range and its subtype; the typecaster is registered in a scope limited to this object, unless globally is set to True
- globally – if False (default) register the typecaster only on conn_or_curs, otherwise register it globally

Returns: a RangeCaster instance responsible for the conversion

If a string is passed to pyrange, a new Range subclass is created with such name and will be available as the range attribute of the returned RangeCaster object.

The function queries the database on conn_or_curs to inspect the pgrange type and raises ProgrammingError if the type is not found. If querying the database is not advisable, use directly the RangeCaster class and register the adapter and typecasters using the provided functions.
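A sketch of registering a hypothetical user-defined floatrange type (the type and class names are illustrative, cur is an open cursor):
>>> cur.execute("CREATE TYPE floatrange AS RANGE (subtype = float8)")
>>> caster = psycopg2.extras.register_range('floatrange', 'FloatRange', cur)
>>> FloatRange = caster.range        # the generated Range subclass
>>> cur.execute("SELECT %s", (FloatRange(1.5, 2.5),))
>>> cur.fetchone()[0]
FloatRange(1.5, 2.5, '[)')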
class psycopg2.extras.RangeCaster(pgrange, pyrange, oid, subtype_oid, array_oid=None)¶
Helper class to convert between Range and PostgreSQL range types.

Objects of this class are usually created by register_range(). Manual creation could be useful if querying the database is not advisable: in this case the oids must be provided.

Object attributes:

range¶
The Range subclass adapted.

typecaster¶
The object responsible for casting.

array_typecaster¶
The object responsible for casting arrays, if available, else None.
UUID data type¶
New in version 2.0.9.
Changed in version 2.0.13: added UUID array support.
>>> psycopg2.extras.register_uuid()
<psycopg2._psycopg.type object at 0x...>
>>> # Python UUID can be used in SQL queries
>>> import uuid
>>> my_uuid = uuid.UUID('{12345678-1234-5678-1234-567812345678}')
>>> psycopg2.extensions.adapt(my_uuid).getquoted()
"'12345678-1234-5678-1234-567812345678'::uuid"
>>> # PostgreSQL UUID are transformed into Python UUID objects.
>>> cur.execute("SELECT 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid")
>>> cur.fetchone()[0]
UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11')
psycopg2.extras.register_uuid(oids=None, conn_or_curs=None)¶
Create the UUID type and a uuid.UUID adapter.

Parameters:
- oids – oid for the PostgreSQL uuid type, or a 2-item sequence with the oids of the type and the array. If not specified, use PostgreSQL standard oids.
- conn_or_curs – where to register the typecaster. If not specified, register it globally.
inet data type¶
New in version 2.0.9.
Changed in version 2.4.5: added inet array support.
>>> psycopg2.extras.register_inet()
<psycopg2._psycopg.type object at 0x...>
>>> cur.mogrify("SELECT %s", (Inet('127.0.0.1/32'),))
"SELECT E'127.0.0.1/32'::inet"
>>> cur.execute("SELECT '192.168.0.1/24'::inet")
>>> cur.fetchone()[0].addr
'192.168.0.1/24'
psycopg2.extras.register_inet(oid=None, conn_or_curs=None)¶
Create the INET type and an Inet adapter.

Parameters:
- oid – oid for the PostgreSQL inet type, or a 2-item sequence with the oids of the type and the array. If not specified, use PostgreSQL standard oids.
- conn_or_curs – where to register the typecaster. If not specified, register it globally.
class psycopg2.extras.Inet(addr)¶
Wrap a string to allow for correct SQL-quoting of inet values.
Note that this adapter does NOT check the passed value to make sure it really is an inet-compatible address but DOES call adapt() on it to make sure it is impossible to execute an SQL-injection by passing an evil value to the initializer.
Fractional time zones¶
psycopg2.extras.register_tstz_w_secs(oids=None, conn_or_curs=None)¶
The function used to register an alternate type caster for TIMESTAMP WITH TIME ZONE to deal with historical time zones with seconds in the UTC offset.

These are now correctly handled by the default type caster, so currently the function doesn’t do anything.
New in version 2.0.9.
Changed in version 2.2.2: function is no-op: see Time zones handling.
Coroutine support¶
psycopg2.extras.wait_select(conn)¶
Wait until a connection or cursor has data available.

The function is an example of a wait callback to be registered with set_wait_callback(). This function uses select() to wait for data to become available.
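A minimal sketch of installing it (the DSN is a placeholder): after set_wait_callback() the regular blocking API keeps working, but every wait is driven through the callback:
import psycopg2
import psycopg2.extensions
import psycopg2.extras

# Install the example callback: from now on every blocking operation on
# regular connections is driven through wait_select() and select().
psycopg2.extensions.set_wait_callback(psycopg2.extras.wait_select)

conn = psycopg2.connect("dbname=test")   # placeholder DSN
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())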