To add an integrity constraint to a table, the constraint can either be defined at table creation or added later with the ALTER TABLE command.
At table creation:
create table exclusive (
  value       integer primary key,
  description varchar2(40),
  year        date,
  constraint just_one unique(value, year)
);
Once a table has been created, a constraint can be added with the alter table add command.
alter table exclusive add (constraint just_one primary key (id));
alter table cars add (constraint drivers foreign key (id) references owners (id));
alter table cars add (constraint models check (models in ('ford','gm','vw')));
Integrity constraints can also be dropped with the command alter table XXXX drop. There are three options for constraint elimination:
alter table XXXX drop primary key (drops whatever primary key exists)
alter table XXXX drop unique (drops the unique constraint on a table)
alter table XXXX drop constraint <constraint_name> (drops the constraint by name, which can be queried at USER_CONSTRAINTS)
NOTE: If a constraint is referenced by a foreign key, Oracle will return an error; to get around this, the keyword cascade is specified in order to drop all dependencies on the constraint.
Constraint information is located in USER_CONSTRAINTS; the SEARCH_CONDITION column in this view indicates the specifics of a constraint. The view can be queried by constraint name or by table.
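As a quick sketch (the table and constraint names reuse the earlier examples), the dictionary view can be queried like this:

```sql
-- List all constraints on a given table, including the search
-- condition used by CHECK constraints.
SELECT constraint_name, constraint_type, search_condition
FROM   user_constraints
WHERE  table_name = 'CARS';

-- Look up a single constraint by name.
SELECT table_name, constraint_type, status
FROM   user_constraints
WHERE  constraint_name = 'JUST_ONE';
```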
Direct load INSERT takes advantage of the same direct path capability as SQL*Loader: it bypasses the database buffer cache, generates no redo-log information, and writes directly to the datafiles. You can also put a table, index, or tablespace in NOLOGGING mode, which further enhances direct load inserts. Direct load INSERTs trade space for performance because they insert rows above the HWM (High Water Mark) of a segment. This wastes space, but it is not a major issue if a segment is often close to full, does not experience major grow-shrink patterns, and (especially) has a tablespace to itself.
Direct load insert appends the inserted data after existing data in a table; free space within the existing data is not reused. This functionality is similar to that of the direct path mode of the SQL*Loader utility. A major benefit of direct-load INSERT is that you can load data without logging redo or undo entries, which improves insert performance significantly. With the conventional path insert, in contrast, free space in the object is reused and referential integrity can be maintained; however, conventional path insertions cannot be parallelized.
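A minimal sketch of a direct-load insert into the emp table used below (table names are illustrative): the APPEND hint requests the direct path, and putting the table in NOLOGGING mode minimizes redo generation.

```sql
-- Reduce redo generation for the direct load.
ALTER TABLE emp NOLOGGING;

-- The APPEND hint makes the insert use the direct path,
-- writing formatted blocks above the high water mark.
INSERT /*+ APPEND */ INTO emp
SELECT * FROM t_emp;

-- After a direct-path insert, the table cannot be queried in the
-- same transaction until the insert is committed.
COMMIT;
```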
Direct Load insert can be performed through one of these options:
INSERT /*+ APPEND */ INTO emp SELECT * FROM t_emp;
NOTE on COMPARISON with CTAS (Create Table As Select): with direct-load INSERT, you can insert data into existing tables instead of having to create new tables. Direct-load INSERT also updates the indexes of the table, while CREATE TABLE .. AS SELECT only creates a new table, which does not have any indexes.
SQL*LOADER
To invoke SQL*Loader use:
sqlldr userid=daniel/ic control=ulcaso1.ctl log=ulcaso1.log
During conventional path loads (the default), the input records are parsed according to the field specifications, and each data field is copied to its corresponding bind array. When the bind array is full (or there is no more data left to read), an array insert is executed. When SQL*Loader performs this type of load, it competes equally with all other processes for buffer resources, which can slow the load significantly.
Instead of filling a bind array buffer and passing it to Oracle with a SQL insert command, a direct path load parses the input data according to the description given in the loader control file, converts the data for each input field to its corresponding Oracle column datatype, and builds a column array structure. SQL*Loader then uses the column array structures to format Oracle data blocks; the newly formatted database blocks are written directly to the database, bypassing most RDBMS processing. Direct path load is much faster than conventional path load, but entails several restrictions: you cannot use direct path for LOBs, VARRAYs, objects, or nested tables.
The script CATLDR.SQL creates the necessary views in order for direct path loads to work.
To start SQL*Loader in direct load mode, the parameter DIRECT must be set to TRUE on the command line or in the parameter file. Examples of SQL*Loader control files:
LOAD DATA
INFILE *
INTO TABLE dept
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(deptno, dname, loc)
BEGINDATA
12,RESEARCH,"SARATOGA"
10,"ACCOUNTING",CLEVELAND
The * in INFILE indicates that the DATA is in the control file itself.
LOAD DATA
INFILE 'ucaso2.dat'
INTO TABLE emp
(empno POSITION(01:04) INTEGER EXTERNAL,
 ename POSITION(06:15) CHAR,
 job   POSITION(17:25) CHAR)
The POSITION parameter removes the need for the TERMINATED BY parameter.
The command to invoke Export is exp <username/password> PARFILE=<parameter_file>
Parameters
Import uses the same parameters and is invoked with the imp command.
Transportable tablespaces enable you to move tablespaces without doing a full export/import. DBAs can copy the datafiles of tablespaces between identical systems (same hardware architecture), thereby transferring data quickly and easily. The transport must be self-contained, meaning there are no references pointing from inside the tablespace to outside the tablespace. The PL/SQL procedure transport_set_check checks whether the tablespace is self-contained.
First the tablespace to be moved is put into read-only mode to ensure that the data captured is a consistent image. Then an export is run of the metadata for the tablespace; this is fast because the amount of data is small. Next, the datafiles for the tablespace are copied to the new system using an operating system command. At the new system, an import is done of the metadata for the tablespace; again, this is fast due to the small amount of data. This is also a way to perform a point-in-time recovery on a tablespace: by moving a good copy of the tablespace into the production database, recovery can be quickly achieved.
You can use transportable tablespaces to move a subset of an Oracle database and "plug" it into another Oracle database, essentially moving tablespaces between the databases. Moving data via transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data, because transporting a tablespace only requires copying the datafiles and integrating the tablespace structural information. You can also use transportable tablespaces to move index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
A transportable tablespace has the following limitations:
The steps for using transportable tablespaces are:
execute dbms_tts.transport_set_check('OSMOSIS,MEXICO',TRUE)
In case the tablespace set is not self-contained (for example, it includes a foreign key constraint referencing a table outside the set), the view TRANSPORT_SET_VIOLATIONS will contain all the violations.
ALTER TABLESPACE osmosis READ ONLY;
EXP TRANSPORT_TABLESPACE=y TABLESPACES=osmosis,mexico TRIGGERS=y/n CONSTRAINTS=y/n GRANTS=y/n FILE=expdat.dmp
Even though the Export utility is used, only data dictionary structural information is exported.
When prompted connect as "sys as sysdba"
ALTER TABLESPACE osmosis READ WRITE;
IMP TRANSPORT_TABLESPACE=y DATAFILES='/db/osmosis_daniel','/db/mexico_daniel_2',...... TABLESPACES=osmosis,mexico,..... TTS_OWNERS=drubio,aveytia FROMUSER=drubio,aveytia TOUSER=paraiza,faraiza FILE=expdat.dmp
IMP TRANSPORT_TABLESPACE=y DATAFILES='(/u01/oradata/staging.f,/u01/oradata/staging2.f)'
IMP TRANSPORT_TABLESPACE=y DATAFILES='/u01/staging.f' TABLESPACES=daniel TTS_OWNERS=rubio
TABLESPACES, TTS_OWNERS, FROMUSER, and TOUSER are optional. The FILE parameter specifies the name of the structural information export file. TTS_OWNERS lists all users who own data in the tablespace set. When you specify TTS_OWNERS, the user names are compared to those in the export file; Import returns an error if there is any mismatch. If you do not specify FROMUSER and TOUSER, all database objects (such as tables and indexes) will be created under the same user as in the source database. Those users must already exist in the target database; if not, Import will return an error indicating that some required users do not exist in the target database.
A profile is a database entity that specifies resource limits. The database must have resource limits turned on in order for the profile resource limits to take effect; this is done by setting the parameter RESOURCE_LIMIT to TRUE in the init.ora file. A profile can limit the number of sessions, the CPU usage per session, and the connect time. A profile can also enforce password management rules:
If no profile is specified, then the DEFAULT profile will be enabled for every user (which has unlimited resources). The following resources can be limited via profiles:
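As a sketch (the profile name and limit values are illustrative), a profile limiting some of these resources could look like:

```sql
-- These limits only take effect when RESOURCE_LIMIT = TRUE.
CREATE PROFILE limited_res LIMIT
  SESSIONS_PER_USER  3        -- concurrent sessions per user
  CPU_PER_SESSION    10000    -- hundredths of a second
  CONNECT_TIME       480      -- minutes
  IDLE_TIME          60;      -- minutes
```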
A password verification function can be enabled with the utlpwdmg.sql script; this function should be created in the SYS schema. In order to create a profile, the following commands are issued:
create profile JAVA_DES limit FAILED_LOGIN_ATTEMPTS 5 PASSWORD_REUSE_MAX 2;
To add parameters to the profile:
alter profile JAVA_DES limit idle_time 60;
To assign this profile to an existing user or newly generated user:
create user alexia identified by alexia profile JAVA_DES
In the profile defined above, if the user fails to log in after five attempts, the account will be locked. In order to unlock it, the DBA has to issue alter user alexia account unlock; you can also lock the account manually with the command alter user alexia account lock. NOTE: a DBA can also expire a password manually with a command like alter user alexia password expire.
See previous parameters to limit resources
The parameter necessary to create a user is identified by, which sets the password for the user. Other parameters which are recommended at creation are default tablespace and temporary tablespace.
These parameters can always be specified at a later time with the alter user command. When the user is created, it still needs to be granted privileges on the database in order to use it; at the very minimum, the system role connect is granted. Even if this role is granted, a quota will still need to be assigned in order for the user to create objects in the tablespaces.
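Putting the pieces together, a sketch of a full user creation (the names, password, and quota are illustrative):

```sql
CREATE USER daniel IDENTIFIED BY secreto
  DEFAULT TABLESPACE ocp
  TEMPORARY TABLESPACE temp;

-- Minimum privileges to connect to the database.
GRANT connect TO daniel;

-- Quota so the user can create objects in its default tablespace.
ALTER USER daniel QUOTA 50M ON ocp;
```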
A user is normally altered to grant privileges, or to modify the tablespace and quotas the user is allowed access to:
alter user daniel default tablespace OCP quota 50M on OCP quota 0M on SYSTEM;
To drop a user, the drop user command is used. It has one optional parameter, cascade, which drops all objects in the user's schema before dropping the user. NOTE on cascade: any views, synonyms, procedures, functions, or packages that referenced objects in the schema of the dropped user will be marked as invalid.
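For example (the user name follows the earlier examples):

```sql
-- Drops the user together with every object in its schema.
-- Dependent views, synonyms, procedures, functions, and packages
-- in other schemas become INVALID.
DROP USER daniel CASCADE;
```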
Via DBA_USERS, USER_USERS, ALL_USERS and:
DBA_ROLE_PRIVS: Displays which users have been assigned certain roles.
DBA_SYS_PRIVS: Displays which users have been assigned system privileges.
DBA_TAB_PRIVS: Displays which users have been assigned table privileges.
DBA_COL_PRIVS: Displays which users have been assigned column privileges.
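A sketch of how these views can be queried (the grantee is illustrative):

```sql
-- Roles granted to a given user.
SELECT granted_role, admin_option
FROM   dba_role_privs
WHERE  grantee = 'DANIEL';

-- Object privileges granted to that user on tables.
SELECT owner, table_name, privilege
FROM   dba_tab_privs
WHERE  grantee = 'DANIEL';
```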
GRANT SELECT ON cars TO daniel;
REVOKE SELECT ON cars FROM daniel;
See section on Passwords|Resources.
A role can optionally require authorization when a user attempts to enable the role. Role authorization can be maintained by the database (using passwords), by the operating system, or by a network service. To alter the authorization method for a role, you must have the ALTER ANY ROLE system privilege or have been granted the role with the ADMIN OPTION.
If you are granted a role protected by a password, you can enable or disable the role only by supplying the proper password for the role in a SET ROLE statement. If the role is created without any protection, the role can be enabled or disabled by any grantee with the set role command.
ALTER ROLE desarrolladores IDENTIFIED externally;
ALTER ROLE desarrolladores IDENTIFIED globally;
ALTER ROLE desarrolladores IDENTIFIED by javaclat;
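A minimal sketch of creating a password-protected role and enabling it in a session (the role name, password, and grants are illustrative):

```sql
CREATE ROLE desarrolladores IDENTIFIED BY javaclat;
GRANT SELECT ON cars TO desarrolladores;
GRANT desarrolladores TO daniel;

-- The grantee must supply the password to enable the role.
SET ROLE desarrolladores IDENTIFIED BY javaclat;
```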
DROP ROLE useless;
The following are predefined roles and the default privileges for each one:
The views are:
A character set is specified when creating a database; this determines what languages can be represented in the database. This choice will influence how you create the database schema and develop applications that process character data, and will also influence interoperability with operating system resources and database performance.
When processing characters, computer systems handle character data as numeric codes rather than their graphical representation. A group of characters (e.g. alphabetic characters, ideographs, symbols, punctuation marks, control characters) can be encoded as a coded character set, which assigns a unique numeric code to each character in the character repertoire. There are many different character code sets used in the industry, and Oracle supports most national, international, and vendor-specific encoded character sets.
Character sets differ in:
The first choice to make when selecting a character set is based on what languages you wish to store in the database. The characters that are encoded in a character set depend on the writing system that will be represented. A writing system can be used to represent a language or a group of languages; in general terms, writing systems can be classified as phonetic or ideographic.
UNICODE is a universal character set that encompasses most major scripts of the modern and ancient world. The Unicode character set provides support for a character repertoire of approximately 39,000 characters and continues to grow.
Encoding schemes have different performance characteristics and can influence your database schema and application development requirements for handling character data, so you need to be aware of the characteristics of the encoding scheme used by the character set you choose. Character sets typically consist of the following encoding schemes:
The character set is used by Oracle for :
NCHAR: In some cases, you may wish to choose an alternate character set for the database because the properties of a different character encoding scheme may be more desirable for extensive character processing operations, or to facilitate ease of programming.
In particular, the following datatypes can be used with an alternate character set: NCHAR, NVARCHAR2, NCLOB. Specifying an NCHAR character set allows you to use a character set different from the database character set in NCHAR, NVARCHAR2, and NCLOB columns. This can be particularly useful because the national character set can use a fixed-width multibyte encoding, whereas the database character set cannot. The benefits of using fixed-width multibyte encoding over a variable-width one are:
NOTE: All SQL commands use the database character set, not the NCHAR set; therefore literals can only be specified in the database character set. Some string operations will be faster if you choose a fixed-width character set for the national character set. Since SQL text can only be represented by the database character set, you must choose an NCHAR character set whose character repertoire is equivalent to, or a subset of, the database character set's.
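As a sketch, the NCHAR datatypes are declared like ordinary character types, but the columns store data in the national character set (table and column names are illustrative):

```sql
CREATE TABLE documentos (
  id     NUMBER PRIMARY KEY,
  titulo NVARCHAR2(100),   -- stored in the national character set
  cuerpo NCLOB             -- national character set large object
);
```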
The database character set should always be a superset of, or equivalent to, the client's operating system native character set. In order to change the character set of a particular database, it is necessary to do a full export/import to properly convert all data to the new character set. However, if and only if the new character set is a strict superset of the current character set, it is possible to issue the command ALTER DATABASE CHARACTER SET.
The following character sets can take advantage of this command when the original character set is US7ASCII, since US7ASCII is a subset of them: WE8ISO8859P1, AL24UTFFSS, and UTF8.
The syntax for the command is:
ALTER DATABASE [db_name] CHARACTER SET <new_character_set>
ALTER DATABASE [db_name] NATIONAL CHARACTER SET <new_NCHAR_character_set>
The view V$NLS_PARAMETERS describes the NLS parameters for the session.
The language-dependent operations are controlled by a number of parameters and environment variables. Since Oracle uses a client/server architecture, the client and server may be in the same or different locales; in the event that the client and server run different character sets, Oracle will handle character set conversion of strings automatically.
The locale-specific NLS data is stored in a directory specified by the ORA_NLS33 environment variable; on most systems this variable is set to $ORACLE_HOME/ocommon/nls/admin/data.
There are four ways to specify NLS parameters:
1. As initialization parameters in init.ora, e.g. NLS_TERRITORY = "CZECH REPUBLIC"
2. As environment variables on the client, e.g. % export NLS_SORT=FRENCH
3. With the ALTER SESSION command, e.g. ALTER SESSION SET NLS_SORT = FRENCH
4. As an argument to certain SQL functions, e.g. TO_CHAR(hiredate,'DD/MM/YYYY','nls_date_language = FRENCH')
Parameter | Description | Default | Scope (I = INIT.ORA, E = Environment Variable, A = Alter Session) |
---|---|---|---|
NLS_CALENDAR | Calendar system | Gregorian | I, -, A |
NLS_COMP | SQL Operator comparison | Binary | -, E, A |
NLS_CREDIT | Credit accounting symbol | NLS_TERRITORY | I, E, A |
NLS_CURRENCY | Local currency symbol | NLS_TERRITORY | I, E, A |
NLS_DATE_FORMAT | Date format | NLS_TERRITORY | I, E, A |
NLS_DATE_LANGUAGE | Language for day and month names | NLS_LANGUAGE | I, E, A |
NLS_DEBIT | Debit accounting symbol | NLS_TERRITORY | I, E, A |
NLS_ISO_CURRENCY | ISO international currency symbol | NLS_TERRITORY | I, E, A |
NLS_LANG | Language, territory, character set | American_America.US7ASCII | -, E, - |
NLS_LANGUAGE | Language | NLS_LANG | I, -, A |
NLS_LIST_SEPARATOR | Character separating items in a list | NLS_TERRITORY | I, -, A |
NLS_MONETARY_CHARACTERS | Monetary symbol dollar and cents (or their equivalents) | NLS_TERRITORY | I, E, A |
NLS_NCHAR | National character set | NLS_LANG | -, E, - |
NLS_NUMERIC_CHARACTERS | Decimal character and group separator | NLS_TERRITORY | I, E, A |
NLS_SORT | Character Sort Sequence | NLS_LANGUAGE | I, E, A |
NLS_TERRITORY | Territory | NLS_LANG | I, -, A |
NLS_DUAL_CURRENCY | Dual currency symbol | NLS_TERRITORY | I, E, A |
NLS_LANG: This is the simplest way to specify locale behavior. A locale is a linguistic and cultural environment in which a program is running; this parameter sets the language, territory, and character set used by the database for both the server session and the client application. Using this one parameter ensures that the language and territory environment for both the server and client are the same. It has three components, language, territory, and charset, specified in the form NLS_LANG = language_territory.charset.
Each component controls a subset of the NLS features:
NLS_LANG is specified as an environment variable on the client; it can be specified as % export NLS_LANG=FRENCH_FRANCE.WE8DEC. Because NLS_LANG is an environment variable, it is read by the client at startup; the client communicates the information defined by NLS_LANG to the server when it connects.
If NLS_LANG is used in init.ora, it sets the NLS language and territory environment used by the database for both the server and client, ensuring that the language environments for both are automatically the same. In case the environment needs to be refined, the parameters NLS_TERRITORY and NLS_LANGUAGE can be used separately.
NLS_LANGUAGE: Specifies the language for server messages, the language for day and month names and their abbreviations, and the default sorting sequence for character data when ORDER BY is specified. The NLS_LANGUAGE value specified in init.ora will be the default for all sessions on that instance.
NLS allows you to retrieve data in native languages; it ensures that database utilities, error messages, sort order, date, time, monetary, numeric, and calendar conventions automatically adapt to the native language and locale.
NLS is implemented with the NLS runtime library, which provides a set of language-independent functions governed by a set of locale-specific data identified and loaded at runtime. See the NLS details specified previously.
The view V$NLS_PARAMETERS holds information on the NLS variables.
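For example, the current session settings can be listed with:

```sql
-- Current NLS settings in effect for the session.
SELECT parameter, value
FROM   v$nls_parameters;
```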