PSQL(1) | PostgreSQL 13.18 Documentation | PSQL(1)
NAME
psql - PostgreSQL interactive terminal
SYNOPSIS
psql [option...] [dbname [username]]
DESCRIPTION
psql is a terminal-based front-end to PostgreSQL. It enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. Alternatively, input can be from a file or from command line arguments. In addition, psql provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.
OPTIONS
-a
--echo-all
-A
--no-align
-b
--echo-errors
-c command
--command=command
command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL and psql meta-commands within a -c option. To achieve that, you could use repeated -c options or pipe the string into psql, for example:
psql -c '\x' -c 'SELECT * FROM foo;'
or
echo '\x \\ SELECT * FROM foo;' | psql
(\\ is the separator meta-command.)
Each SQL command string passed to -c is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See Section 52.2.2.1 for more details about how the server handles multi-query strings.) Also, psql only prints the result of the last SQL command in the string. This is different from the behavior when the same string is read from a file or fed to psql's standard input, because then psql sends each SQL command separately.
Because of this behavior, putting more than one SQL command in a single -c string often has unexpected results. It's better to use repeated -c commands or feed multiple commands to psql's standard input, either using echo as illustrated above, or via a shell here-document, for example:
psql <<EOF
\x
SELECT * FROM foo;
EOF
--csv
-d dbname
--dbname=dbname
-e
--echo-queries
-E
--echo-hidden
-f filename
--file=filename
If filename is - (hyphen), then standard input is read until an EOF indication or \q meta-command. This can be used to intersperse interactive input with input from files. Note however that Readline is not used in this case (much as if -n had been specified).
Using this option is subtly different from writing psql < filename. In general, both will do what you expect, but using -f enables some nice features such as error messages with line numbers. There is also a slight chance that using this option will reduce the start-up overhead. On the other hand, the variant using the shell's input redirection is (in theory) guaranteed to yield exactly the same output you would have received had you entered everything by hand.
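For example, both of the following run the same script, but only the first produces error messages that include the script's line numbers (script.sql and mydb are placeholder names):

psql -f script.sql mydb
psql mydb < script.sql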
-F separator
--field-separator=separator
-h hostname
--host=hostname
-H
--html
-l
--list
When this option is used, psql will connect to the database postgres, unless a different database is named on the command line (option -d or non-option argument, possibly via a service entry, but not via an environment variable).
-L filename
--log-file=filename
-n
--no-readline
-o filename
--output=filename
-p port
--port=port
-P assignment
--pset=assignment
-q
--quiet
-R separator
--record-separator=separator
-s
--single-step
-S
--single-line
Note
This mode is provided for those who insist on it, but you are not necessarily encouraged to use it. In particular, if you mix SQL and meta-commands on a line the order of execution might not always be clear to the inexperienced user.
-t
--tuples-only
-T table_options
--table-attr=table_options
-U username
--username=username
-v assignment
--set=assignment
--variable=assignment
-V
--version
-w
--no-password
Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect as well as the initial connection attempt.
-W
--password
If the server requires password authentication and a password is not available from other sources such as a .pgpass file, psql will prompt for a password in any case. However, psql will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Note that this option will remain set for the entire session, and so it affects uses of the meta-command \connect as well as the initial connection attempt.
-x
--expanded
-X
--no-psqlrc
-z
--field-separator-zero
-0
--record-separator-zero
-1
--single-transaction
If the commands themselves contain BEGIN, COMMIT, or ROLLBACK, this option will not have the desired effects. Also, if an individual command cannot be executed inside a transaction block, specifying this option will cause the whole transaction to fail.
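For example, to run a hypothetical migration script so that a failure in any statement rolls back all of its changes:

psql -1 -f migrate.sql mydb

This has the same effect as wrapping the script's commands in BEGIN/COMMIT, subject to the caveats above.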
-?
--help[=topic]
EXIT STATUS
psql returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurs (e.g., out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, and 3 if an error occurred in a script and the variable ON_ERROR_STOP was set.
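A calling shell script can branch on these codes. A minimal sketch, assuming a hypothetical script cleanup.sql and setting ON_ERROR_STOP so that script errors are reported as status 3:

psql -v ON_ERROR_STOP=1 -f cleanup.sql mydb
case $? in
    0) echo "completed" ;;
    2) echo "could not connect" ;;
    3) echo "error in script" ;;
esac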
USAGE
Connecting to a Database
psql is a regular PostgreSQL client application. In order to connect to a database you need to know the name of your target database, the host name and port number of the server, and what user name you want to connect as. psql can be told about those parameters via command line options, namely -d, -h, -p, and -U respectively. If an argument is found that does not belong to any option it will be interpreted as the database name (or the user name, if the database name is already given). Not all of these options are required; there are useful defaults. If you omit the host name, psql will connect via a Unix-domain socket to a server on the local host, or via TCP/IP to localhost on machines that don't have Unix-domain sockets. The default port number is determined at compile time. Since the database server uses the same default, you will not have to specify the port in most cases. The default user name is your operating-system user name, as is the default database name. Note that you cannot just connect to any database under any user name. Your database administrator should have informed you about your access rights.
When the defaults aren't quite right, you can save yourself some typing by setting the environment variables PGDATABASE, PGHOST, PGPORT and/or PGUSER to appropriate values. (For additional environment variables, see Section 33.14.) It is also convenient to have a ~/.pgpass file to avoid regularly having to type in passwords. See Section 33.15 for more information.
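For example, a hypothetical environment setup for a remote server, after which psql can be started without any connection options:

export PGHOST=db.example.com
export PGPORT=5433
export PGDATABASE=mydb
export PGUSER=myuser
psql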
An alternative way to specify connection parameters is in a conninfo string or a URI, which is used instead of a database name. This mechanism gives you very wide control over the connection. For example:
$ psql "service=myservice sslmode=require" $ psql postgresql://dbmaster:5433/mydb?sslmode=require
This way you can also use LDAP for connection parameter lookup as described in Section 33.17. See Section 33.1.2 for more information on all the available connection options.
If the connection could not be made for any reason (e.g., insufficient privileges, server is not running on the targeted host, etc.), psql will return an error and terminate.
If both standard input and standard output are a terminal, then psql sets the client encoding to “auto”, which will detect the appropriate client encoding from the locale settings (LC_CTYPE environment variable on Unix systems). If this doesn't work out as expected, the client encoding can be overridden using the environment variable PGCLIENTENCODING.
Entering SQL Commands
In normal operation, psql provides a prompt with the name of the database to which psql is currently connected, followed by the string =>. For example:
$ psql testdb
psql (13.18)
Type "help" for help.

testdb=>
At the prompt, the user can type in SQL commands. Ordinarily, input lines are sent to the server when a command-terminating semicolon is reached. An end of line does not terminate a command. Thus commands can be spread over several lines for clarity. If the command was sent and executed without error, the results of the command are displayed on the screen.
If untrusted users have access to a database that has not adopted a secure schema usage pattern, begin your session by removing publicly-writable schemas from search_path. One can add options=-csearch_path= to the connection string or issue SELECT pg_catalog.set_config('search_path', '', false) before other SQL commands. This consideration is not specific to psql; it applies to every interface for executing arbitrary SQL commands.
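For example, either of the following starts a session with an empty search_path (mydb is a placeholder database name):

psql "dbname=mydb options=-csearch_path="

or, as the first command after connecting:

SELECT pg_catalog.set_config('search_path', '', false);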
Whenever a command is executed, psql also polls for asynchronous notification events generated by LISTEN(7) and NOTIFY(7).
While C-style block comments are passed to the server for processing and removal, SQL-standard comments are removed by psql.
Meta-Commands
Anything you enter in psql that begins with an unquoted backslash is a psql meta-command that is processed by psql itself. These commands make psql more useful for administration or scripting. Meta-commands are often called slash or backslash commands.
The format of a psql command is the backslash, followed immediately by a command verb, then any arguments. The arguments are separated from the command verb and each other by any number of whitespace characters.
To include whitespace in an argument you can quote it with single quotes. To include a single quote in an argument, write two single quotes within single-quoted text. Anything contained in single quotes is furthermore subject to C-like substitutions for \n (new line), \t (tab), \b (backspace), \r (carriage return), \f (form feed), \digits (octal), and \xdigits (hexadecimal). A backslash preceding any other character within single-quoted text quotes that single character, whatever it is.
If an unquoted colon (:) followed by a psql variable name appears within an argument, it is replaced by the variable's value, as described in SQL Interpolation below. The forms :'variable_name' and :"variable_name" described there work as well. The :{?variable_name} syntax allows testing whether a variable is defined. It is substituted by TRUE or FALSE. Escaping the colon with a backslash protects it from substitution.
Within an argument, text that is enclosed in backquotes (`) is taken as a command line that is passed to the shell. The output of the command (with any trailing newline removed) replaces the backquoted text. Within the text enclosed in backquotes, no special quoting or other processing occurs, except that appearances of :variable_name where variable_name is a psql variable name are replaced by the variable's value. Also, appearances of :'variable_name' are replaced by the variable's value suitably quoted to become a single shell command argument. (The latter form is almost always preferable, unless you are very sure of what is in the variable.) Because carriage return and line feed characters cannot be safely quoted on all platforms, the :'variable_name' form prints an error message and does not substitute the variable value when such characters appear in the value.
Some commands take an SQL identifier (such as a table name) as argument. These arguments follow the syntax rules of SQL: Unquoted letters are forced to lowercase, while double quotes (") protect letters from case conversion and allow incorporation of whitespace into the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, FOO"BAR"BAZ is interpreted as fooBARbaz, and "A weird"" name" becomes A weird" name.
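For example (the names are illustrative only), single quotes preserve whitespace within a single argument, and double quotes preserve the case of an SQL identifier:

testdb=> \echo 'one argument  with  spaces'
one argument  with  spaces
testdb=> \d "MixedCaseTable"

Without the double quotes, the identifier would be folded to mixedcasetable before the lookup.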
Parsing for arguments stops at the end of the line, or when another unquoted backslash is found. An unquoted backslash is taken as the beginning of a new meta-command. The special sequence \\ (two backslashes) marks the end of arguments and continues parsing SQL commands, if any. That way SQL and psql commands can be freely mixed on a line. But in any case, the arguments of a meta-command cannot continue beyond the end of the line.
Many of the meta-commands act on the current query buffer. This is simply a buffer holding whatever SQL command text has been typed but not yet sent to the server for execution. This will include previous input lines as well as any text appearing before the meta-command on the same line.
The following meta-commands are defined:
\a
\c or \connect [ -reuse-previous=on|off ] [ dbname [ username ] [ host ] [ port ] | conninfo ]
Specifying any of dbname, username, host or port as - is equivalent to omitting that parameter.
The new connection can re-use connection parameters from the previous connection; not only database name, user, host, and port, but other settings such as sslmode. By default, parameters are re-used in the positional syntax, but not when a conninfo string is given. Passing a first argument of -reuse-previous=on or -reuse-previous=off overrides that default. If parameters are re-used, then any parameter not explicitly specified as a positional parameter or in the conninfo string is taken from the existing connection's parameters. An exception is that if the host setting is changed from its previous value using the positional syntax, any hostaddr setting present in the existing connection's parameters is dropped. Also, any password used for the existing connection will be re-used only if the user, host, and port settings are not changed. When the command neither specifies nor reuses a particular parameter, the libpq default is used.
If the new connection is successfully made, the previous connection is closed. If the connection attempt fails (wrong user name, access denied, etc.), the previous connection will be kept if psql is in interactive mode. But when executing a non-interactive script, processing will immediately stop with an error. This distinction was chosen as a convenience for interactive users, who may simply be correcting a typo, and as a safety mechanism so that scripts do not accidentally act on the wrong database.
Examples:
=> \c mydb myuser host.dom 6432
=> \c service=foo
=> \c "host=localhost port=5432 dbname=mydb connect_timeout=10 sslmode=disable"
=> \c -reuse-previous=on sslmode=require    -- changes only sslmode
=> \c postgresql://tom@localhost/mydb?application_name=myapp
\C [ title ]
\cd [ directory ]
Tip
To print your current working directory, use \! pwd.
\conninfo
\copy { table [ ( column_list ) ] } from { 'filename' | program 'command' | stdin | pstdin } [ [ with ] ( option [, ...] ) ] [ where condition ]
\copy { table [ ( column_list ) ] | ( query ) } to { 'filename' | program 'command' | stdout | pstdout } [ [ with ] ( option [, ...] ) ]
When program is specified, command is executed by psql and the data passed from or to command is routed between the server and the client. Again, the execution privileges are those of the local user, not the server, and no SQL superuser privileges are required.
For \copy ... from stdin, data rows are read from the same source that issued the command, continuing until \. is read or the stream reaches EOF. This option is useful for populating tables in-line within a SQL script file. For \copy ... to stdout, output is sent to the same place as psql command output, and the COPY count command status is not printed (since it might be confused with a data row). To read/write psql's standard input or output regardless of the current command source or \o option, write from pstdin or to pstdout.
The syntax of this command is similar to that of the SQL COPY(7) command. All options other than the data source/destination are as specified for COPY(7). Because of this, special parsing rules apply to the \copy meta-command. Unlike most other meta-commands, the entire remainder of the line is always taken to be the arguments of \copy, and neither variable interpolation nor backquote expansion are performed in the arguments.
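For example (the file and table names here are placeholders), to load a CSV file that has a header row, and to write a query's result as CSV to psql's standard output:

testdb=> \copy my_table from 'data.csv' with (format csv, header)
testdb=> \copy (SELECT * FROM my_table) to pstdout with (format csv)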
Tip
Another way to obtain the same result as \copy ... to is to use the SQL COPY ... TO STDOUT command and terminate it with \g filename or \g |program. Unlike \copy, this method allows the command to span multiple lines; also, variable interpolation and backquote expansion can be used.
Tip
These operations are not as efficient as the SQL COPY command with a file or program data source or destination, because all data must pass through the client/server connection. For large amounts of data the SQL command might be preferable. Also, because of this pass-through method, \copy ... from in CSV mode will erroneously treat a \. data value alone on a line as an end-of-input marker.
\copyright
\crosstabview [ colV [ colH [ colD [ sortcolH ] ] ] ]
Each column specification can be a column number (starting at 1) or a column name. The usual SQL case folding and quoting rules apply to column names. If omitted, colV is taken as column 1 and colH as column 2. colH must differ from colV. If colD is not specified, then there must be exactly three columns in the query result, and the column that is neither colV nor colH is taken to be colD.
The vertical header, displayed as the leftmost column, contains the values found in column colV, in the same order as in the query results, but with duplicates removed.
The horizontal header, displayed as the first row, contains the values found in column colH, with duplicates removed. By default, these appear in the same order as in the query results. But if the optional sortcolH argument is given, it identifies a column whose values must be integer numbers, and the values from colH will appear in the horizontal header sorted according to the corresponding sortcolH values.
Inside the crosstab grid, for each distinct value x of colH and each distinct value y of colV, the cell located at the intersection (x,y) contains the value of the colD column in the query result row for which the value of colH is x and the value of colV is y. If there is no such row, the cell is empty. If there are multiple such rows, an error is reported.
\d[S+] [ pattern ]
For some types of relation, \d shows additional information for each column: column values for sequences, indexed expressions for indexes, and foreign data wrapper options for foreign tables.
The command form \d+ is identical, except that more information is displayed: any comments associated with the columns of the table are shown, as is the presence of OIDs in the table, the view definition if the relation is a view, and a non-default replica identity setting.
By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
Note
If \d is used without a pattern argument, it is equivalent to \dtvmsE which will show a list of all visible tables, views, materialized views, sequences and foreign tables. This is purely a convenience measure.
\da[S] [ pattern ]
\dA[+] [ pattern ]
\dAc[+] [access-method-pattern [input-type-pattern]]
\dAf[+] [access-method-pattern [input-type-pattern]]
\dAo[+] [access-method-pattern [operator-family-pattern]]
\dAp[+] [access-method-pattern [operator-family-pattern]]
\db[+] [ pattern ]
\dc[S+] [ pattern ]
\dC[+] [ pattern ]
\dd[S] [ pattern ]
\dd displays descriptions for objects matching the pattern, or of visible objects of the appropriate type if no argument is given. But in either case, only objects that have a description are listed. By default, only user-created objects are shown; supply a pattern or the S modifier to include system objects.
Descriptions for objects can be created with the COMMENT(7) SQL command.
\dD[S+] [ pattern ]
\ddp [ pattern ]
The ALTER DEFAULT PRIVILEGES (ALTER_DEFAULT_PRIVILEGES(7)) command is used to set default access privileges. The meaning of the privilege display is explained in Section 5.7.
\dE[S+] [ pattern ]
\di[S+] [ pattern ]
\dm[S+] [ pattern ]
\ds[S+] [ pattern ]
\dt[S+] [ pattern ]
\dv[S+] [ pattern ]
\des[+] [ pattern ]
\det[+] [ pattern ]
\deu[+] [ pattern ]
Caution
\deu+ might also display the user name and password of the remote user, so care should be taken not to disclose them.
\dew[+] [ pattern ]
\df[anptwS+] [ pattern ]
Tip
To look up functions taking arguments or returning values of a specific data type, use your pager's search capability to scroll through the \df output.
\dF[+] [ pattern ]
\dFd[+] [ pattern ]
\dFp[+] [ pattern ]
\dFt[+] [ pattern ]
\dg[S+] [ pattern ]
\dl
\dL[S+] [ pattern ]
\dn[S+] [ pattern ]
\do[S+] [ pattern ]
\dO[S+] [ pattern ]
\dp [ pattern ]
The GRANT(7) and REVOKE(7) commands are used to set access privileges. The meaning of the privilege display is explained in Section 5.7.
\dP[itn+] [ pattern ]
If the modifier n (“nested”) is used, or a pattern is specified, then non-root partitioned relations are included, and a column is shown displaying the parent of each partitioned relation.
If + is appended to the command name, the sum of the sizes of each relation's partitions is also displayed, along with the relation's description. If n is combined with +, two sizes are shown: one including the total size of directly-attached leaf partitions, and another showing the total size of all partitions, including indirectly attached sub-partitions.
\drds [ role-pattern [ database-pattern ] ]
The ALTER ROLE (ALTER_ROLE(7)) and ALTER DATABASE (ALTER_DATABASE(7)) commands are used to define per-role and per-database configuration settings.
\dRp[+] [ pattern ]
\dRs[+] [ pattern ]
\dT[S+] [ pattern ]
\du[S+] [ pattern ]
\dx[+] [ pattern ]
\dy[+] [ pattern ]
\e or \edit [ filename ] [ line_number ]
The new contents of the query buffer are then re-parsed according to the normal rules of psql, treating the whole buffer as a single line. Any complete queries are immediately executed; that is, if the query buffer contains or ends with a semicolon, everything up to that point is executed and removed from the query buffer. Whatever remains in the query buffer is redisplayed. Type semicolon or \g to send it, or \r to cancel it by clearing the query buffer.
Treating the buffer as a single line primarily affects meta-commands: whatever is in the buffer after a meta-command will be taken as argument(s) to the meta-command, even if it spans multiple lines. (Thus you cannot make meta-command-using scripts this way. Use \i for that.)
If a line number is specified, psql will position the cursor on the specified line of the file or query buffer. Note that if a single all-digits argument is given, psql assumes it is a line number, not a file name.
Tip
See Environment, below, for how to configure and customize your editor.
\echo text [ ... ]
=> \echo `date`
Tue Oct 26 21:40:57 CEST 1999
If the first argument is an unquoted -n the trailing newline is not written (nor is the first argument).
Tip
If you use the \o command to redirect your query output you might wish to use \qecho instead of this command. See also \warn.
\ef [ function_description [ line_number ] ]
The target function can be specified by name alone, or by name and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name.
If no function is specified, a blank CREATE FUNCTION template is presented for editing.
If a line number is specified, psql will position the cursor on the specified line of the function body. (Note that the function body typically does not begin on the first line of the file.)
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \ef, and neither variable interpolation nor backquote expansion are performed in the arguments.
Tip
See Environment, below, for how to configure and customize your editor.
\encoding [ encoding ]
\errverbose
\ev [ view_name [ line_number ] ]
If no view is specified, a blank CREATE VIEW template is presented for editing.
If a line number is specified, psql will position the cursor on the specified line of the view definition.
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \ev, and neither variable interpolation nor backquote expansion are performed in the arguments.
\f [ string ]
\g [ (option=value [...]) ] [ filename ]
\g [ (option=value [...]) ] [ |command ]
If parentheses appear after \g, they surround a space-separated list of option=value formatting-option clauses, which are interpreted in the same way as \pset option value commands, but take effect only for the duration of this query. In this list, spaces are not allowed around = signs, but are required between option clauses. If =value is omitted, the named option is changed in the same way as for \pset option with no explicit value.
If a filename or |command argument is given, the query's output is written to the named file or piped to the given shell command, instead of displaying it as usual. The file or command is written to only if the query successfully returns zero or more tuples, not if the query fails or is a non-data-returning SQL command.
If the current query buffer is empty, the most recently sent query is re-executed instead. Except for that behavior, \g without any arguments is essentially equivalent to a semicolon. With arguments, \g provides a “one-shot” alternative to the \o command, and additionally allows one-shot adjustments of the output formatting options normally set by \pset.
When the last argument begins with |, the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are performed in it. The rest of the line is simply passed literally to the shell.
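For example, a one-shot CSV export that leaves the session's \pset settings untouched (the table and file names are illustrative):

testdb=> SELECT * FROM my_table \g (format=csv tuples_only=on) my_table.csv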
\gdesc
If the current query buffer is empty, the most recently sent query is described instead.
\gexec
=> SELECT format('create index on my_table(%I)', attname)
-> FROM pg_attribute
-> WHERE attrelid = 'my_table'::regclass AND attnum > 0
-> ORDER BY attnum
-> \gexec
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
The generated queries are executed in the order in which the rows are returned, and left-to-right within each row if there is more than one column. NULL fields are ignored. The generated queries are sent literally to the server for processing, so they cannot be psql meta-commands nor contain psql variable references. If any individual query fails, execution of the remaining queries continues unless ON_ERROR_STOP is set. Execution of each query is subject to ECHO processing. (Setting ECHO to all or queries is often advisable when using \gexec.) Query logging, single-step mode, timing, and other query execution features apply to each generated query as well.
If the current query buffer is empty, the most recently sent query is re-executed instead.
\gset [ prefix ]
=> SELECT 'hello' AS var1, 10 AS var2
-> \gset
=> \echo :var1 :var2
hello 10
If you specify a prefix, that string is prepended to the query's column names to create the variable names to use:
=> SELECT 'hello' AS var1, 10 AS var2
-> \gset result_
=> \echo :result_var1 :result_var2
hello 10
If a column result is NULL, the corresponding variable is unset rather than being set.
If the query fails or does not return one row, no variables are changed.
If the current query buffer is empty, the most recently sent query is re-executed instead.
\gx [ (option=value [...]) ] [ filename ]
\gx [ (option=value [...]) ] [ |command ]
\h or \help [ command ]
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \help, and neither variable interpolation nor backquote expansion are performed in the arguments.
Note
To simplify typing, commands that consist of several words do not have to be quoted. Thus it is fine to type \help alter table.
\H or \html
\i or \include filename
If filename is - (hyphen), then standard input is read until an EOF indication or \q meta-command. This can be used to intersperse interactive input with input from files. Note that Readline behavior will be used only if it is active at the outermost level.
Note
If you want to see the lines on the screen as they are read you must set the variable ECHO to all.
\if expression
\elif expression
\else
\endif
The \if and \elif commands read their argument(s) and evaluate them as a boolean expression. If the expression yields true then processing continues normally; otherwise, lines are skipped until a matching \elif, \else, or \endif is reached. Once an \if or \elif test has succeeded, the arguments of later \elif commands in the same block are not evaluated but are treated as false. Lines following an \else are processed only if no earlier matching \if or \elif succeeded.
The expression argument of an \if or \elif command is subject to variable interpolation and backquote expansion, just like any other backslash command argument. After that it is evaluated like the value of an on/off option variable. So a valid value is any unambiguous case-insensitive match for one of: true, false, 1, 0, on, off, yes, no. For example, t, T, and tR will all be considered to be true.
Expressions that do not properly evaluate to true or false will generate a warning and be treated as false.
Lines being skipped are parsed normally to identify queries and backslash commands, but queries are not sent to the server, and backslash commands other than conditionals (\if, \elif, \else, \endif) are ignored. Conditional commands are checked only for valid nesting. Variable references in skipped lines are not expanded, and backquote expansion is not performed either.
All the backslash commands of a given conditional block must appear in the same source file. If EOF is reached on the main input file or an \include-ed file before all local \if-blocks have been closed, then psql will raise an error.
Here is an example:
-- check for the existence of two separate records in the database and store
-- the results in separate psql variables
SELECT
    EXISTS(SELECT 1 FROM customer WHERE customer_id = 123) as is_customer,
    EXISTS(SELECT 1 FROM employee WHERE employee_id = 456) as is_employee
\gset
\if :is_customer
    SELECT * FROM customer WHERE customer_id = 123;
\elif :is_employee
    \echo 'is not a customer but is an employee'
    SELECT * FROM employee WHERE employee_id = 456;
\else
    \if yes
        \echo 'not a customer or employee'
    \else
        \echo 'this will never print'
    \endif
\endif
\ir or \include_relative filename
\l[+] or \list[+] [ pattern ]
\lo_export loid filename
Tip
Use \lo_list to find out the large object's OID.
\lo_import filename [ comment ]
foo=> \lo_import '/home/peter/pictures/photo.xcf' 'a picture of me'
lo_import 152801
The response indicates that the large object received object ID 152801, which can be used to access the newly-created large object in the future. For the sake of readability, it is recommended to always associate a human-readable comment with every object. Both OIDs and comments can be viewed with the \lo_list command.
Note that this command is subtly different from the server-side lo_import because it acts as the local user on the local file system, rather than the server's user and file system.
\lo_list
\lo_unlink loid
Tip
Use \lo_list to find out the large object's OID.
\o or \out [ filename ]
\o or \out [ |command ]
If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are performed in it. The rest of the line is simply passed literally to the shell.
“Query results” includes all tables, command responses, and notices obtained from the database server, as well as output of various backslash commands that query the database (such as \d); but not error messages.
Tip
To intersperse text output in between query results, use \qecho.
\p or \print
\password [ username ]
\prompt [ text ] name
By default, \prompt uses the terminal for input and output. However, if the -f command line switch was used, \prompt uses standard input and standard output.
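For example, a sketch that prompts for a table name and then interpolates it as a safely quoted identifier (the prompt text and variable name are arbitrary):

testdb=> \prompt 'Table to count: ' tbl
testdb=> SELECT count(*) FROM :"tbl";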
\pset [ option [ value ] ]
\pset without any arguments displays the current status of all printing options.
Adjustable printing options are:
border
columns
csv_fieldsep
expanded (or x)
fieldsep
fieldsep_zero
footer
format
aligned format is the standard, human-readable, nicely formatted text output; this is the default.
unaligned format writes all columns of a row on one line, separated by the currently active field separator. This is useful for creating output that might be intended to be read in by other programs, for example, tab-separated or comma-separated format. However, the field separator character is not treated specially if it appears in a column's value; so CSV format may be better suited for such purposes.
csv format
writes column values separated by commas, applying the quoting rules described in RFC 4180. This output is compatible with the CSV format of the server's COPY command. A header line with column names is generated unless the tuples_only parameter is on. Titles and footers are not printed. Each row is terminated by the system-dependent end-of-line character, which is typically a single newline (\n) for Unix-like systems or a carriage return and newline sequence (\r\n) for Microsoft Windows. Field separator characters other than comma can be selected with \pset csv_fieldsep.
wrapped format is like aligned but wraps wide data values across lines to make the output fit in the target column width. The target width is determined as described under the columns option. Note that psql will not attempt to wrap column header titles; therefore, wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target.
The asciidoc, html, latex, latex-longtable, and troff-ms formats put out tables that are intended to be included in documents using the respective mark-up language. They are not complete documents! This might not be necessary in HTML, but in LaTeX you must have a complete document wrapper. The latex format uses LaTeX's tabular environment. The latex-longtable format requires the LaTeX longtable and booktabs packages.
linestyle
ascii style uses plain ASCII characters. Newlines in data are shown using a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
old-ascii style uses plain ASCII characters, using the formatting style used in PostgreSQL 8.4 and earlier. Newlines in data are shown using a : symbol in place of the left-hand column separator. When the data is wrapped from one line to the next without a newline character, a ; symbol is used in place of the left-hand column separator.
unicode style uses Unicode box-drawing characters. Newlines in data are shown using a carriage return symbol in the right-hand margin. When the data is wrapped from one line to the next without a newline character, an ellipsis symbol is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line.
When the border setting is greater than zero, the linestyle option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but Unicode characters look nicer on displays that recognize them.
null
numericlocale
pager
When the pager option is off, the pager program is not used. When the pager option is on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. The pager option can also be set to always, which causes the pager to be used for all terminal output regardless of whether it fits on the screen. \pset pager without a value toggles pager use on and off.
pager_min_lines
recordsep
recordsep_zero
tableattr (or T)
In latex-longtable format, this controls the proportional width of each column containing a left-aligned data type. It is specified as a whitespace-separated list of values, e.g., '0.2 0.2 0.6'. Unspecified output columns use the last specified value.
title (or C)
tuples_only (or t)
unicode_border_linestyle
unicode_column_linestyle
unicode_header_linestyle
Illustrations of how these different formats look can be seen in Examples, below.
Tip
There are various shortcut commands for \pset. See \a, \C, \f, \H, \t, \T, and \x.
\q or \quit
\qecho text [ ... ]
\r or \reset
\s [ filename ]
\set [ name [ value [ ... ] ] ]
\set without any arguments displays the names and values of all currently-set psql variables.
Valid variable names can contain letters, digits, and underscores. See Variables below for details. Variable names are case-sensitive.
Certain variables are special, in that they control psql's behavior or are automatically set to reflect connection state. These variables are documented in Variables, below.
\setenv name [ value ]
testdb=> \setenv PAGER less
testdb=> \setenv LESS -imx4F
\sf[+] function_description
The target function can be specified by name alone, or by name and arguments, for example foo(integer, text). The argument types must be given if there is more than one function of the same name.
If + is appended to the command name, then the output lines are numbered, with the first line of the function body being line 1.
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \sf, and neither variable interpolation nor backquote expansion are performed in the arguments.
\sv[+] view_name
If + is appended to the command name, then the output lines are numbered from 1.
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \sv, and neither variable interpolation nor backquote expansion are performed in the arguments.
\t
\T table_options
\timing [ on | off ]
\unset name
Most variables that control psql's behavior cannot be unset; instead, an \unset command is interpreted as setting them to their default values. See Variables below.
\w or \write filename
\w or \write |command
If the argument begins with |, then the entire remainder of the line is taken to be the command to execute, and neither variable interpolation nor backquote expansion are performed in it. The rest of the line is simply passed literally to the shell.
\warn text [ ... ]
\watch [ seconds ]
If the current query buffer is empty, the most recently sent query is re-executed instead.
\x [ on | off | auto ]
\z [ pattern ]
This is an alias for \dp (“display privileges”).
\! [ command ]
Unlike most other meta-commands, the entire remainder of the line is always taken to be the argument(s) of \!, and neither variable interpolation nor backquote expansion are performed in the arguments. The rest of the line is simply passed literally to the shell.
\? [ topic ]
\;
Normally, psql will dispatch a SQL command to the server as soon as it reaches the command-ending semicolon, even if more input remains on the current line. Thus for example entering
select 1; select 2; select 3;
will result in the three SQL commands being individually sent to the server, with each one's results being displayed before continuing to the next command. However, a semicolon entered as \; will not trigger command processing, so that the command before it and the one after are effectively combined and sent to the server in one request. So for example
select 1\; select 2\; select 3;
results in sending the three SQL commands to the server in a single request, when the non-backslashed semicolon is reached. The server executes such a request as a single transaction, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. (See Section 52.2.2.1 for more details about how the server handles multi-query strings.) psql prints only the last query result it receives for each request; in this example, although all three SELECTs are indeed executed, psql only prints the 3.
Patterns
The various \d commands accept a pattern parameter to specify the object name(s) to be displayed. In the simplest case, a pattern is just the exact name of the object. The characters within a pattern are normally folded to lower case, just as in SQL names; for example, \dt FOO will display the table named foo. As in SQL names, placing double quotes around a pattern stops folding to lower case. Should you need to include an actual double quote character in a pattern, write it as a pair of double quotes within a double-quote sequence; again this is in accord with the rules for SQL quoted identifiers. For example, \dt "FOO""BAR" will display the table named FOO"BAR (not foo"bar). Unlike the normal rules for SQL names, you can put double quotes around just part of a pattern, for instance \dt FOO"FOO"BAR will display the table named fooFOObar.
Whenever the pattern parameter is omitted completely, the \d commands display all objects that are visible in the current schema search path — this is equivalent to using * as the pattern. (An object is said to be visible if its containing schema is in the search path and no object of the same kind and name appears earlier in the search path. This is equivalent to the statement that the object can be referenced by name without explicit schema qualification.) To see all objects in the database regardless of visibility, use *.* as the pattern.
Within a pattern, * matches any sequence of characters (including no characters) and ? matches any single character. (This notation is comparable to Unix shell file name patterns.) For example, \dt int* displays tables whose names begin with int. But within double quotes, * and ? lose these special meanings and are just matched literally.
A pattern that contains a dot (.) is interpreted as a schema name pattern followed by an object name pattern. For example, \dt foo*.*bar* displays all tables whose table name includes bar that are in schemas whose schema name starts with foo. When no dot appears, then the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally.
Advanced users can use regular-expression notations such as character classes, for example [0-9] to match any digit. All regular expression special characters work as specified in Section 9.7.3, except for . which is taken as a separator as mentioned above, * which is translated to the regular-expression notation .*, ? which is translated to ., and $ which is matched literally. You can emulate these pattern characters at need by writing ? for ., (R+|) for R*, or (R|) for R?. $ is not needed as a regular-expression character since the pattern must match the whole name, unlike the usual interpretation of regular expressions (in other words, $ is automatically appended to your pattern). Write * at the beginning and/or end if you don't wish the pattern to be anchored. Note that within double quotes, all regular expression special characters lose their special meanings and are matched literally. Also, the regular expression special characters are matched literally in operator name patterns (i.e., the argument of \do).
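For example (the object names are illustrative), the first pattern below uses a character class to list tables whose names start with t followed by a digit, and the second quotes only the schema part so that its case is preserved while still matching every table in that schema:

testdb=> \dt t[0-9]*
testdb=> \dt "Sales".*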
Advanced Features
Variables
psql provides variable substitution features similar to common Unix command shells. Variables are simply name/value pairs, where the value can be any string of any length. The name must consist of letters (including non-Latin letters), digits, and underscores.
To set a variable, use the psql meta-command \set. For example,
testdb=> \set foo bar
sets the variable foo to the value bar. To retrieve the content of the variable, precede the name with a colon, for example:
testdb=> \echo :foo
bar
This works in both regular SQL commands and meta-commands; there is more detail in SQL Interpolation, below.
If you call \set without a second argument, the variable is set to an empty-string value. To unset (i.e., delete) a variable, use the command \unset. To show the values of all variables, call \set without any argument.
Note
The arguments of \set are subject to the same substitution rules as with other commands. Thus you can construct interesting references such as \set :foo 'something' and get “soft links” or “variable variables” of Perl or PHP fame, respectively. Unfortunately (or fortunately?), there is no way to do anything useful with these constructs. On the other hand, \set bar :foo is a perfectly valid way to copy a variable.
A number of these variables are treated specially by psql. They represent certain option settings that can be changed at run time by altering the value of the variable, or in some cases represent changeable state of psql. By convention, all specially treated variables' names consist of all upper-case ASCII letters (and possibly digits and underscores). To ensure maximum compatibility in the future, avoid using such variable names for your own purposes.
Variables that control psql's behavior generally cannot be unset or set to invalid values. An \unset command is allowed but is interpreted as setting the variable to its default value. A \set command without a second argument is interpreted as setting the variable to on, for control variables that accept that value, and is rejected for others. Also, control variables that accept the values on and off will also accept other common spellings of Boolean values, such as true and false.
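For example, a few illustrative settings of specially treated variables, using the rules just described:

testdb=> \set AUTOCOMMIT off
testdb=> \set FETCH_COUNT 100
testdb=> \unset FETCH_COUNT
testdb=> \set ON_ERROR_STOP

The last command sets ON_ERROR_STOP to on, and \unset FETCH_COUNT restores that variable's default behavior.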
The specially treated variables are:
AUTOCOMMIT
Note
In autocommit-off mode, you must explicitly abandon any failed transaction by entering ABORT or ROLLBACK. Also keep in mind that if you exit the session without committing, your work will be lost.
Note
The autocommit-on mode is PostgreSQL's traditional behavior, but autocommit-off is closer to the SQL spec. If you prefer autocommit-off, you might wish to set it in the system-wide psqlrc file or your ~/.psqlrc file.
COMP_KEYWORD_CASE
DBNAME
ECHO
ECHO_HIDDEN
ENCODING
ERROR
FETCH_COUNT
Tip
Although you can use any output format with this feature, the default aligned format tends to look bad because each group of FETCH_COUNT rows will be formatted separately, leading to varying column widths across the row groups. The other output formats work better.
HIDE_TABLEAM
HISTCONTROL
Note
This feature was shamelessly plagiarized from Bash.
HISTFILE
\set HISTFILE ~/.psql_history- :DBNAME
in ~/.psqlrc will cause psql to maintain a separate history for each database.
Note
This feature was shamelessly plagiarized from Bash.
HISTSIZE
Note
This feature was shamelessly plagiarized from Bash.
HOST
IGNOREEOF
Note
This feature was shamelessly plagiarized from Bash.
LASTOID
LAST_ERROR_MESSAGE
LAST_ERROR_SQLSTATE
ON_ERROR_ROLLBACK
ON_ERROR_STOP
PORT
PROMPT1
PROMPT2
PROMPT3
QUIET
ROW_COUNT
SERVER_VERSION_NAME
SERVER_VERSION_NUM
SHOW_CONTEXT
SINGLELINE
SINGLESTEP
SQLSTATE
USER
VERBOSITY
VERSION
VERSION_NAME
VERSION_NUM
SQL Interpolation
A key feature of psql variables is that you can substitute (“interpolate”) them into regular SQL statements, as well as the arguments of meta-commands. Furthermore, psql provides facilities for ensuring that variable values used as SQL literals and identifiers are properly quoted. The syntax for interpolating a value without any quoting is to prepend the variable name with a colon (:). For example,
testdb=> \set foo 'my_table'
testdb=> SELECT * FROM :foo;
would query the table my_table. Note that this may be unsafe: the value of the variable is copied literally, so it can contain unbalanced quotes, or even backslash commands. You must make sure that it makes sense where you put it.
When a value is to be used as an SQL literal or identifier, it is safest to arrange for it to be quoted. To quote the value of a variable as an SQL literal, write a colon followed by the variable name in single quotes. To quote the value as an SQL identifier, write a colon followed by the variable name in double quotes. These constructs deal correctly with quotes and other special characters embedded within the variable value. The previous example would be more safely written this way:
testdb=> \set foo 'my_table'
testdb=> SELECT * FROM :"foo";
Variable interpolation will not be performed within quoted SQL literals and identifiers. Therefore, a construction such as ':foo' doesn't work to produce a quoted literal from a variable's value (and it would be unsafe if it did work, since it wouldn't correctly handle quotes embedded in the value).
One example use of this mechanism is to copy the contents of a file into a table column. First load the file into a variable and then interpolate the variable's value as a quoted string:
testdb=> \set content `cat my_file.txt`
testdb=> INSERT INTO my_table VALUES (:'content');
(Note that this still won't work if my_file.txt contains NUL bytes. psql does not support embedded NUL bytes in variable values.)
Since colons can legally appear in SQL commands, an apparent attempt at interpolation (that is, :name, :'name', or :"name") is not replaced unless the named variable is currently set. In any case, you can escape a colon with a backslash to protect it from substitution.
The :{?name} special syntax returns TRUE or FALSE depending on whether the variable exists or not, and is thus always substituted, unless the colon is backslash-escaped.
The colon syntax for variables is standard SQL for embedded query languages, such as ECPG. The colon syntaxes for array slices and type casts are PostgreSQL extensions, which can sometimes conflict with the standard usage. The colon-quote syntax for escaping a variable's value as an SQL literal or identifier is a psql extension.
Prompting
The prompts psql issues can be customized to your preference. The three variables PROMPT1, PROMPT2, and PROMPT3 contain strings and special escape sequences that describe the appearance of the prompt. Prompt 1 is the normal prompt that is issued when psql requests a new command. Prompt 2 is issued when more input is expected during command entry, for example because the command was not terminated with a semicolon or a quote was not closed. Prompt 3 is issued when you are running an SQL COPY FROM STDIN command and you need to type in a row value on the terminal.
The value of the selected prompt variable is printed literally, except where a percent sign (%) is encountered. Depending on the next character, certain other text is substituted instead. Defined substitutions are:
%M
%m
%>
%n
%/
%~
%#
%p
%R
%x
%l
%digits
%:name:
%`command`
%[ ... %]
testdb=> \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# '
results in a boldfaced (1;) yellow-on-black (33;40) prompt on VT100-compatible, color-capable terminals.
%w
Note
This feature was shamelessly plagiarized from tcsh.
Command-Line Editing
psql supports the Readline library for convenient line editing and retrieval. The command history is automatically saved when psql exits and is reloaded when psql starts up. Tab-completion is also supported, although the completion logic makes no claim to be an SQL parser. The queries generated by tab-completion can also interfere with other SQL commands, e.g., SET TRANSACTION ISOLATION LEVEL. If for some reason you do not like the tab completion, you can turn it off by putting this in a file named .inputrc in your home directory:
$if psql
set disable-completion on
$endif
(This is not a psql but a Readline feature. Read its documentation for further details.)
ENVIRONMENT
COLUMNS
PGDATABASE
PGHOST
PGPORT
PGUSER
PG_COLOR
PSQL_EDITOR
EDITOR
VISUAL
PSQL_EDITOR_LINENUMBER_ARG
PSQL_EDITOR_LINENUMBER_ARG='+'
PSQL_EDITOR_LINENUMBER_ARG='--line '
The default is + on Unix systems (corresponding to the default editor vi, and useful for many other common editors); but there is no default on Windows systems.
PSQL_HISTORY
PSQL_PAGER
PAGER
PSQLRC
SHELL
TMPDIR
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 33.14).
FILES
psqlrc and ~/.psqlrc
The system-wide startup file is named psqlrc and is sought in the installation's “system configuration” directory, which is most reliably identified by running pg_config --sysconfdir. By default this directory will be ../etc/ relative to the directory containing the PostgreSQL executables. The name of this directory can be set explicitly via the PGSYSCONFDIR environment variable.
The user's personal startup file is named .psqlrc and is sought in the invoking user's home directory. On Windows, which lacks such a concept, the personal startup file is named %APPDATA%\postgresql\psqlrc.conf. The location of the user's startup file can be set explicitly via the PSQLRC environment variable.
Both the system-wide startup file and the user's personal startup file can be made psql-version-specific by appending a dash and the PostgreSQL major or minor release number to the file name, for example ~/.psqlrc-9.2 or ~/.psqlrc-9.2.5. The most specific version-matching file will be read in preference to a non-version-specific file.
.psql_history
The location of the history file can be set explicitly via the HISTFILE psql variable or the PSQL_HISTORY environment variable.
NOTES
If you want to use psql to connect to several servers of different major versions, it is recommended that you use the newest version of psql. Alternatively, you can keep around a copy of psql from each major version and be sure to use the version that matches the respective server. But in practice, this additional complication should not be necessary.
NOTES FOR WINDOWS USERS
psql is built as a “console application”. Since the Windows console windows use a different encoding than the rest of the system, you must take special care when using 8-bit characters within psql. If psql detects a problematic console code page, it will warn you at startup. To change the console code page, two things are necessary: set the code page by entering cmd.exe /c chcp 1252 (1252 is a code page appropriate for German; replace it with your value), and set the console font to Lucida Console, because the raster font does not work with the ANSI code page.
EXAMPLES
The first example shows how to spread a command over several lines of input. Notice the changing prompt:
testdb=> CREATE TABLE my_table (
testdb(> first integer not null default 0,
testdb(> second text)
testdb-> ;
CREATE TABLE
Now look at the table definition again:
testdb=> \d my_table
              Table "public.my_table"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 first  | integer |           | not null | 0
 second | text    |           |          |
Now we change the prompt to something more interesting:
testdb=> \set PROMPT1 '%n@%m %~%R%# '
peter@localhost testdb=>
Let's assume you have filled the table with data and want to take a look at it:
peter@localhost testdb=> SELECT * FROM my_table;
 first | second
-------+--------
     1 | one
     2 | two
     3 | three
     4 | four
(4 rows)
You can display tables in different ways by using the \pset command:
peter@localhost testdb=> \pset border 2
Border style is 2.
peter@localhost testdb=> SELECT * FROM my_table;
+-------+--------+
| first | second |
+-------+--------+
|     1 | one    |
|     2 | two    |
|     3 | three  |
|     4 | four   |
+-------+--------+
(4 rows)
peter@localhost testdb=> \pset border 0
Border style is 0.
peter@localhost testdb=> SELECT * FROM my_table;
 first second
 ----- ------
     1 one
     2 two
     3 three
     4 four
(4 rows)
peter@localhost testdb=> \pset border 1
Border style is 1.
peter@localhost testdb=> \pset format csv
Output format is csv.
peter@localhost testdb=> \pset tuples_only
Tuples only is on.
peter@localhost testdb=> SELECT second, first FROM my_table;
one,1
two,2
three,3
four,4
peter@localhost testdb=> \pset format unaligned
Output format is unaligned.
peter@localhost testdb=> \pset fieldsep '\t'
Field separator is "	".
peter@localhost testdb=> SELECT second, first FROM my_table;
one	1
two	2
three	3
four	4
Alternatively, use the short commands:
peter@localhost testdb=> \a \t \x
Output format is aligned.
Tuples only is off.
Expanded display is on.
peter@localhost testdb=> SELECT * FROM my_table;
-[ RECORD 1 ]-
first  | 1
second | one
-[ RECORD 2 ]-
first  | 2
second | two
-[ RECORD 3 ]-
first  | 3
second | three
-[ RECORD 4 ]-
first  | 4
second | four
Also, these output format options can be set for just one query by using \g:
peter@localhost testdb=> SELECT * FROM my_table
peter@localhost testdb-> \g (format=aligned tuples_only=off expanded=on)
-[ RECORD 1 ]-
first  | 1
second | one
-[ RECORD 2 ]-
first  | 2
second | two
-[ RECORD 3 ]-
first  | 3
second | three
-[ RECORD 4 ]-
first  | 4
second | four
When suitable, query results can be shown in a crosstab representation with the \crosstabview command:
testdb=> SELECT first, second, first > 2 AS gt2 FROM my_table;
 first | second | gt2
-------+--------+-----
     1 | one    | f
     2 | two    | f
     3 | three  | t
     4 | four   | t
(4 rows)

testdb=> \crosstabview first second
 first | one | two | three | four
-------+-----+-----+-------+------
     1 | f   |     |       |
     2 |     | f   |       |
     3 |     |     | t     |
     4 |     |     |       | t
(4 rows)
This second example shows a multiplication table with rows sorted in reverse numerical order and columns with an independent, ascending numerical order.
testdb=> SELECT t1.first as "A", t2.first+100 AS "B", t1.first*(t2.first+100) as "AxB",
testdb(> row_number() over(order by t2.first) AS ord
testdb(> FROM my_table t1 CROSS JOIN my_table t2 ORDER BY 1 DESC
testdb(> \crosstabview "A" "B" "AxB" ord
 A | 101 | 102 | 103 | 104
---+-----+-----+-----+-----
 4 | 404 | 408 | 412 | 416
 3 | 303 | 306 | 309 | 312
 2 | 202 | 204 | 206 | 208
 1 | 101 | 102 | 103 | 104
(4 rows)