SQL*Loader control file position

Let's use the following sample data as reference. Is CHAR(10) the trick here? Counting the two trailing spaces after 'Munising', the field is 10 characters wide. Yes: when the end position is not specified, it is derived from the datatype. Here the starting position of the data field is 1, and the ending position, though not specified, is derived from the CHAR(10) datatype. For the next field, SQL*Loader skips 1 byte and reads the next 2 bytes, as derived from the CHAR(2) datatype.
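A minimal sketch of this kind of positional specification (table, file, and field names are hypothetical; only the POSITION and CHAR clauses reflect the discussion above):

    LOAD DATA
    INFILE 'cities.dat'
    INTO TABLE cities
    (
      city  POSITION(1)   CHAR(10),  -- start at column 1; end derived from CHAR(10)
      state POSITION(*+1) CHAR(2)    -- skip 1 byte after the previous field, read 2 bytes
    )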

As to why zipcode is CHAR: a zip code is treated as a simple fixed-length string. You are not going to do any arithmetic operations on it.

So CHAR is appropriate for it. Also, have a look at the SQL*Loader datatypes. Note that the Oracle database supports UTF-16 encoding only with big-endian byte ordering (AL16UTF16), and only as a database national character set, not as a database character set. When data character set conversion is required, the target character set should be a superset of the source data file character set. Otherwise, characters that have no equivalent in the target character set are converted to replacement characters, often a default character such as a question mark (?).

This causes loss of data. If field lengths are specified in bytes and data character set conversion is required, then the converted values may take more bytes than the source values if the target character set uses more bytes than the source character set for any character that is converted. An error is reported if the larger converted value exceeds the size of the database column. You can avoid this problem by specifying the database column size in characters and also by using character sizes in the control file to describe the data.

Another way to avoid this problem is to ensure that the maximum column size is large enough, in bytes, to hold the converted value. Otherwise, rows might be rejected because a field appears too large for the database column when in reality it is not, or a load might be abnormally terminated without any rows being loaded, when only the field that really was too large should have been rejected.

Normally, the specified name must be the name of an Oracle-supported character set.

However, because you are allowed to set up data using the byte order of the system where you create the data file, the data in the data file can be either big-endian or little-endian. Therefore, a different character set name, UTF16, is used.

All primary data files are assumed to be in the same character set. See the Oracle Database Globalization Support Guide for more information about the names of the supported character sets. If the control file character set is different from the data file character set, then keep the following issue in mind.

To ensure that the specifications are correct, you may prefer to specify hexadecimal strings rather than character string values. If hexadecimal strings are used with a data file in a UTF-16 Unicode encoding, then the byte order is different on a big-endian versus a little-endian system.

For example, a comma (",") in UTF-16 on a big-endian system is X'002c'; on a little-endian system it is X'2c00'. SQL*Loader requires hexadecimal strings to be specified in big-endian format and swaps the bytes as necessary, which allows the same syntax to be used in the control file on both a big-endian and a little-endian system.
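For instance, a comma delimiter for UTF-16 data can be written in big-endian hexadecimal form rather than as a character literal (a sketch; table and field names are illustrative):

    INTO TABLE customers
    FIELDS TERMINATED BY X'002c'   -- comma as a big-endian UTF-16 code unit
    ( id, name )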

For example, the specification CHAR(10) in the control file can mean 10 bytes or 10 characters. These are equivalent if the data file uses a single-byte character set. However, they are often different if the data file uses a multibyte character set. To avoid insertion errors caused by expansion of character strings during character set conversion, use character-length semantics in both the data file and the target database columns. Byte-length semantics are the default for all data files except those that use the UTF16 character set, which uses character-length semantics by default.
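A sketch of requesting character-length semantics in the control file (assuming a SQL*Loader release that supports the LENGTH SEMANTICS clause; file, table, and field names are illustrative):

    LOAD DATA
    CHARACTERSET AL32UTF8 LENGTH SEMANTICS CHAR
    INFILE 'names.dat'
    INTO TABLE people
    ( name POSITION(1) CHAR(10) )   -- 10 characters, not 10 bytes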

Some datatypes use byte-length semantics even if character-length semantics are being used for the data file, because the data is binary, or is in a special binary-encoded form (in the case of ZONED and DECIMAL). This is necessary to handle data files that have a mix of data of different datatypes, some of which use character-length semantics and some of which use byte-length semantics.

The SMALLINT length field takes up a certain number of bytes depending on the system (usually 2 bytes), but its value indicates the length of the character string in characters.

Character-length semantics in the data file can be used independently of whether character-length semantics are used for the database columns. Therefore, the data file and the database columns can use either the same or different length semantics. The fastest way to load shift-sensitive character data is to use fixed-position fields without delimiters. To improve performance, remember that if blanks are not preserved and multibyte-blank-checking is required, a slower path is used.

This can happen when the shift-in byte is the last byte of a field after single-byte blank stripping is performed.

Additionally, when an interrupted load is continued, the use and value of the SKIP parameter can vary depending on the particular case. The following sections explain the possible scenarios. In a conventional path load, data is committed after all data in the bind array is loaded into all tables. If the load is discontinued, then only the rows that were processed up to the time of the last commit operation are loaded.

There is no partial commit of data. In a direct path load, the behavior of a discontinued load varies depending on the reason the load was discontinued.

Space errors when loading data into multiple subpartitions (that is, loading into a partitioned table, a composite partitioned table, or one partition of a composite partitioned table): if space errors occur when loading into multiple subpartitions, then the load is discontinued and no data is saved, unless ROWS has been specified (in which case all data that was previously committed is saved).

The reason for this behavior is that it is possible that rows might be loaded out of order. This is because each row is assigned (not necessarily in order) to a partition, and each partition is loaded separately.

If the load discontinues before all rows assigned to partitions are loaded, then the row for record "n" may have been loaded, but not the row for record "n-1".

Space errors when loading data into an unpartitioned table, one partition of a partitioned table, or one subpartition of a composite partitioned table: in either case, this behavior is independent of whether the ROWS parameter was specified. When you continue the load, you can use the SKIP parameter to skip rows that have already been loaded.

This means that when you continue the load, the value you specify for the SKIP parameter may be different for different tables. If a fatal error is encountered, then the load is stopped and no data is saved unless ROWS was specified at the beginning of the load. In that case, all data that was previously committed is saved. This means that the value of the SKIP parameter will be the same for all tables.

When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state. If the direct path load method is used, then any indexes on the table are left in an unusable state. You can either rebuild or re-create the indexes before continuing, or after the load is restarted and completes. Other indexes are valid if no other errors occurred. See Indexes Left in an Unusable State for other reasons why an index might be left in an unusable state.

To continue the discontinued load, use the SKIP parameter to specify the number of logical records that have already been processed by the previous load. At the time the load is discontinued, the value for SKIP is written to the log file. The message specifying the value of the SKIP parameter is preceded by a message indicating why the load was discontinued. Note that for multiple-table loads, the value of the SKIP parameter is displayed only if it is the same for all tables.
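For example, continuing a load from the command line might look like this (a sketch; the control file name and record count are illustrative):

    sqlldr USERID=scott CONTROL=emp.ctl SKIP=1001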

To combine multiple physical records into one logical record, you can use one of the following clauses, depending on your data: CONCATENATE or CONTINUEIF, both sketched below. With CONCATENATE, an integer specifies the number of physical records to combine. With CONTINUEIF, a condition is tested against each record; for example, two records might be combined if a pound sign were in byte position 80 of the first record. If any other character were there, then the second record would not be added to the first.
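A sketch of both forms (illustrative values):

    CONCATENATE 3                -- always combine every 3 physical records
    CONTINUEIF THIS (80) = '#'   -- combine while byte 80 of the current record is '#'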

If the condition is true in the current record, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false. If the condition is false, then the current physical record becomes the last physical record of the current logical record. THIS is the default. If the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false. For the equal operator, the field and comparison string must match exactly for the condition to be true.

For the not equal operator, they can differ in any character. The LAST test is similar to THIS, but it is always made against the last nonblank character. If the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false.

If the condition is false in the current record, then the current physical record is the last physical record of the current logical record. Column numbers start with 1. Either a hyphen or a colon is acceptable (start-end or start:end).

If you omit end, then the length of the continuation field is the length of the byte string or character string. If you use end, and the length of the resulting continuation field is not the same as that of the byte string or the character string, then the shorter one is padded. Character strings are padded with blanks, hexadecimal strings with zeros.

str: a string of characters to be compared to the continuation field defined by start and end, according to the operator. The string must be enclosed in double or single quotation marks. The comparison is made character by character, blank padding on the right if necessary. X'hex-string': a string of bytes in hexadecimal format, used in the same way as str; X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal). By default, the continuation characters are excluded from the logical record. This is the only time you refer to positions in physical records.

All other references are to logical records. That is, data values are allowed to span the records with no extra characters (continuation characters) in the middle.

This means that the continuation characters are removed if they are in positions 3 through 5 of the record. It also means that the characters in positions 3 through 5 are removed from the record even if they are not the continuation characters.

Assume that you have the same physical records as in the previous example. Note that columns 1 and 2 are not removed from the physical records when the logical records are assembled. Therefore, the logical records are assembled with the same results as in the previous example. The specification of fields and datatypes is described in later sections.

The table must already exist. If the table is not in the user's schema, then the user must either use a synonym to reference the table or include the schema name as part of the table name (for example, scott.emp).

This means that the default schema will not necessarily be the one you specified in the connect string, if there are logon triggers present that get executed during connection to a database.

A table-specific loading method, if specified, overrides the global table-loading method. The following sections discuss using these options to load data into empty and nonempty tables. For more information, see Specifying Delimiters. All statements use the data definition language syntax described in the previous sections. The control file statements are presented in the approximate order in which they would appear in the control file.

Comments can appear anywhere in the command section of the file, but they should not appear within the data. Precede any comment with two hyphens; all text to the right of the double hyphen is ignored until the end of the line. The sketch below shows a comment and also how the escape character is used in quoted strings.
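A short sketch (hypothetical table and field names) showing a comment and an escaped double quotation mark inside a double-quoted SQL string, assuming the backslash escape is supported on your platform (see below):

    -- Wrap each department name in literal double quotation marks
    LOAD DATA
    INFILE 'dept.dat'
    INTO TABLE dept
    ( dname POSITION(1:14) CHAR "'\"' || :dname || '\"'" )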

Some reserved words are particularly likely to collide with column names; the SQL*Loader documentation lists them. You must specify SQL strings within double quotation marks, as in the sketch below.
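For instance, a sketch of a SQL string applied to a field (hypothetical column name):

    ename POSITION(1:20) CHAR "UPPER(:ename)"   -- the SQL string is in double quotes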

SQL strings cannot be used with column objects or collections, or attributes of column objects or collections. If you encounter problems when trying to specify a complete pathname, it may be due to an operating system-specific incompatibility caused by special characters in the specification. In many cases, specifying the pathname within single quotation marks prevents errors. If not, please see your operating system-specific documentation for possible solutions.

The same rule applies when single quotation marks are required in a string delimited by single quotation marks. Preceding the double quotation mark with a backslash indicates that it is to be taken literally. Note: a double quotation mark in the initial position cannot be escaped; therefore, you should avoid creating strings with an initial quotation mark. When converting to a different operating system, these strings will likely need to be modified. If your operating system uses the backslash character to separate directories in a pathname, and if the version of Oracle running on your operating system implements the backslash escape character for filenames and other nonportable strings, then you must specify double backslashes in your pathnames and use single quotation marks.

Additional Information: Please see your Oracle operating system-specific documentation for information about which escape characters are required or allowed.

The version of Oracle running on your operating system may not implement the escape character for nonportable strings. When the escape character is disallowed, a backslash is treated as a normal character, rather than as an escape character (although it is still usable in all other strings).

Pathnames can then be specified with single backslashes. Because the backslash is not recognized as an escape character, strings within single quotation marks cannot be embedded inside another string delimited by single quotation marks.

This rule also holds for double quotation marks: a string within double quotation marks cannot be embedded inside another string delimited by double quotation marks.

If your data is contained in the control file itself and not in a separate datafile, you must include it following the load configuration specifications. Case 1: Loading Variable-Length Data provides an example. To specify a file that contains the data to be loaded, use the INFILE keyword, followed by the filename and an optional processing options string.

The INFILE keyword specifies that a datafile specification follows. If no filename is specified, the filename defaults to the control filename with an extension or file type of DAT.

Any spaces or punctuation marks in the filename must be enclosed in single quotation marks. See Specifying Filenames and Object Names. If you have data in the control file as well as in datafiles, you must specify the asterisk first in order for the data to be read.

The processing options string specifies the datafile format and can also optimize datafile reads. See Specifying Datafile Format and Buffering. Datafiles need not have the same file processing options, although the layout of the records must be identical. For example, two files could be specified with completely different file processing options strings, and a third could consist of data in the control file. You can also specify a separate discard file and bad file for each datafile. However, the separate bad files and discard files must be declared immediately after each datafile name.

For example, the excerpt from a control file sketched below specifies four datafiles with separate bad and discard files.
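A reconstructed sketch of such an excerpt (the filenames are illustrative):

    INFILE  mydat1.dat  BADFILE mydat1.bad  DISCARDFILE mydat1.dis
    INFILE  mydat2.dat
    INFILE  mydat3.dat  DISCARDFILE mydat3.dis
    INFILE  mydat4.dat  DISCARDMAX 10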

For mydat2.dat, neither a bad file nor a discard file is specified. Therefore, only the bad file is created, as needed; if created, the bad file has a default filename and extension, and the discard file is not created even if rows are discarded. For mydat3.dat, the default bad file is created, if needed, and a discard file with the specified name (mydat3.dis) is created, as needed. Note: filenames that include spaces or punctuation marks must be enclosed in single quotation marks. For more details on filename specification, see Specifying Filenames and Object Names. The READBUFFERS keyword is used for direct path loads only. The file processing options string is used to specify file format and buffering. Additional Information: for details on the syntax of the file processing options string, see your Oracle operating system-specific documentation.

For example, suppose that your operating system has an option-string syntax that accepts record-size and buffer-count settings. Note: the sketch below uses the recommended convention of single quotation marks for filenames and double quotation marks for everything else.
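A sketch of such a declaration (RECSIZE and BUFFERS stand in for whatever option-string keywords your operating system defines; the filename is illustrative):

    INFILE 'mydata.dat' "RECSIZE 80 BUFFERS 8"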

If you have specified that a bad file is to be created, the following applies: if the bad file is created, it overwrites any existing file with the same name, so ensure that you do not overwrite a file you wish to retain. Additional Information: on some systems a new version of the file is created if a file with the same name already exists. See your Oracle operating system-specific documentation to find out whether this is the case on your system. If you do not specify a name for the bad file, the name defaults to the name of the datafile with an extension or file type of BAD. The bad file is created in the same record and file format as the datafile so that the data can be reloaded after corrections.

The syntax is the BADFILE keyword followed by the filename. A record can also be rejected when SQL*Loader cannot evaluate it; that is, it cannot determine whether the record meets the WHEN-clause criteria, as in the case of a field that is missing its final delimiter. If the data can be evaluated according to the WHEN-clause criteria (even with unbalanced delimiters), then it is either inserted or rejected.

If a record is rejected on insert, then no part of that record is inserted into any table. For example, if data in a record is to be inserted into multiple tables, and most of the inserts succeed, but one insert fails, then all the inserts from that record are rolled back. The record is then written to the bad file, where it can be corrected and reloaded.

Previous inserts from records without errors are not affected. The log file indicates the Oracle error for each rejected record. Case 4: Loading Combined Physical Records demonstrates rejected records. Writing the entire record to the bad file ensures that the row can be repaired there and reloaded to all tables consistently.

Also, if a row is loaded into one table, it should be loaded into all other tables that do not filter it out; otherwise, reloading a fixed version of the row from the bad file could cause the data to be loaded into some tables twice. Data from LOB files or secondary data files is not written to a bad file when there are rejected rows. SQL*Loader can also create a discard file; the records contained in this file are called discarded records.

Discarded records do not satisfy any of the WHEN clauses specified in the control file. These records differ from rejected records: discarded records do not necessarily contain any bad data, and no insert is attempted on a discarded record. If no records are discarded, then a discard file is not created. Note that you can specify the discard file directly, with a parameter specifying its name, or indirectly, by specifying the maximum number of discards. The DISCARDFILE keyword specifies that a discard filename follows.

The default filename is the name of the datafile, and the default file extension or file type is DSC. A discard filename specified on the command line overrides one specified in the control file. If a discard file with that name already exists, it is either overwritten or a new version is created, depending on your operating system. The discard file is created with the same record and file format as the datafile.

So it can easily be used for subsequent loads with the existing control file, after changing the WHEN clauses or editing the data. For example, the DISCARDFILE clause can name a discard file notappl with an explicitly specified file extension or file type.

If no WHEN clauses are specified for a table, an attempt is made to insert every record into that table, so records may be rejected, but none are discarded. Case 4: Loading Combined Physical Records provides an example of using a discard file.

Data from LOB files or secondary data files is not written to a discard file when there are discarded rows. When the discard limit is reached, processing of the datafile terminates and continues with the next datafile, if one exists. You can specify a different number of discards for each datafile. Alternatively, if the number of discards is specified only once, then the maximum number of discards specified applies to all files. Case 4: Loading Combined Physical Records provides an example.

A filename specified on the command line overrides any bad file that you may have specified in the control file. See the Oracle8i National Language Support Guide for information about supported character encoding schemes.

The following sections provide a brief introduction to some of the supported schemes. Multi-byte character sets support Asian languages. Data can be loaded in multi-byte format, and database objects (fields, tables, and so on) can be specified with multi-byte characters. In the control file, comments and object names may also use multi-byte characters. The session character set is the character set supported by your terminal.

During a direct path load, data converts directly into the database character set. The direct path load method, therefore, allows data in a character set that is not supported by your terminal to be loaded. Note: When data conversion is required, it is essential that the target character set contains a representation of all characters that exist in the data. Otherwise, characters that have no equivalent in the target character set are converted to a default character, with consequent loss of data.

When using the direct path load method, the database character set should be a superset of, or equivalent to, the datafile character sets. Similarly, during a conventional path load, the session character set should be a superset of, or equivalent to, the datafile character sets. Different datafiles can be specified with different character sets.

However, only one character set can be specified for each datafile, and delimiters and comparison clause values must be specified to match the character set in use in the datafile.

To ensure that the specifications are correct, it may be preferable to specify hexadecimal strings rather than character string values. Data that uses a different character set must be in a separate file.

After the rows are successfully deleted, a commit is issued. You cannot recover the data that was in the table before the load, unless it was saved with Export or a comparable utility.

If data doesn't already exist, the new rows are simply loaded. All rows in the table are deleted and the new data is loaded. The row deletes cause any delete triggers defined on the table to fire. For more information on cascaded deletes, see the "Data Integrity" chapter of Oracle8i Concepts.

INSERT is the default loading method; it requires the table to be empty before loading. To update existing rows instead, use the following procedure: load the new data into a work table, use SQL to update the target table from the work table, and then drop the work table. A sketch of the update step follows below.
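A sketch of the update step (hypothetical tables: emp is the target, emp_work is the work table loaded by SQL*Loader, keyed by empno):

    UPDATE emp e
       SET e.sal = (SELECT w.sal FROM emp_work w WHERE w.empno = e.empno)
     WHERE EXISTS (SELECT 1 FROM emp_work w WHERE w.empno = e.empno);

    DROP TABLE emp_work;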

A load might be discontinued, for example, because the table reaches its maximum number of extents. Discontinued loads can be continued after more space is made available. When a load is discontinued, any data already loaded remains in the tables, and the tables are left in a valid state. If the conventional path is used, all indexes are left in a valid state. If the direct path load method is used, any indexes that run out of space are left in direct load state. They must be dropped before the load can continue. Other indexes are valid provided no other errors occurred.

See Indexes Left in Index Unusable State for other reasons why an index might be left in direct load state. The log file records how far the load progressed; use this information to resume the load where it left off. Any indexes that are left in direct load state must be dropped before continuing the load.

The indexes can then be re-created either before continuing or after the load completes. To continue a discontinued direct or conventional path load involving only one table, specify the number of logical records to skip with the command-line parameter SKIP. It is not possible for multiple tables in a conventional path load to become unsynchronized.

So a multiple table conventional path load can also be continued with the command-line parameter SKIP. Use the same procedure that you would use for single-table loads, as described in the preceding paragraph.

A discontinued direct path load may have loaded a different number of records into each table. If so, the tables are not synchronized, and continuing the load is slightly more complex. If the numbers are the same, you can use the previously described simple continuation. The CONTINUE_LOAD statement exists to handle unsynchronized interrupted loads; with it, you must use the table-level SKIP clause. Combining physical records is usually unnecessary, but there may still be situations in which you want to do so. You can create one logical record from multiple physical records using one of the following two clauses, depending on your data: CONCATENATE or CONTINUEIF.

For example, two records might be combined if there were a pound sign in character position 80 of the first record.

If any other character were there, the second record would not be added to the first. If the condition is false, then the current physical record becomes the last physical record of the current logical record.

THIS is the default. NEXT: if the condition is true in the next record, then the current physical record is concatenated to the current logical record, continuing until the condition is false. Column numbers start with 1. Either a hyphen or a colon is acceptable (start-end or start:end). If you omit end, the length of the continuation field is the length of the byte string or character string.

If you use end, and the length of the resulting continuation field is not the same as that of the byte string or the character string, the shorter one is padded. Character strings are padded with blanks, hexadecimal strings with zeroes. With LAST, if the last nonblank character in the current physical record meets the test, then the next physical record is read and concatenated to the current physical record, continuing until the condition is false.

If the condition is false in the current record, then the current physical record is the last physical record of the current logical record. For the equal operator, the field and comparison string must match exactly for the condition to be true. For the not equal operator, they may differ in any character. The string must be enclosed in double or single quotation marks; the comparison is made character by character, blank padding on the right if necessary. X'hex-string': a string of bytes in hexadecimal format, used in the same way as the character string described above.

X'1FB033' would represent the three bytes with values 1F, B0, and 33 (hexadecimal). This is the only time you refer to character positions in physical records; all other references are to logical records. This allows data values to span the records with no extra characters (continuation characters) in the middle. Trailing blanks in the physical records are part of the logical records. You cannot fragment records in secondary datafiles (SDFs) into multiple physical records.

In the first example, you specify that if the current physical record (record1) has an asterisk in column 1, then the next physical record (record2) should be appended to it. If record2 also has an asterisk in column 1, then record3 is appended as well. If record2 does not have an asterisk in column 1, then it is still appended to record1, but record3 begins a new logical record.

In the next example, you specify that if the current physical record (record1) has a comma in the last nonblank data column, then the next physical record should be appended to it. If a record does not have a comma in the last column, it is the last physical record of the current logical record.

In the last example, you specify that if the next physical record (record2) has a "10" in columns 7 and 8, then it should be appended to the preceding physical record (record1). If a record does not have a "10" in columns 7 and 8, then it begins a new logical record.

The INTO TABLE clause defines the relationship between records in the datafile and tables in the database. The specification of fields and datatypes is described in later sections. The table must already exist. If the table is not in your schema, the table name should be prefixed by the username of the owner (for example, scott.emp).

The table-level OPTIONS clause is valid only for a parallel load. For more information, see Parallel Data Loading Models. You can choose to load or discard a logical record by using the WHEN clause to test a condition in the record.

The WHEN clause appears after the table name and is followed by one or more field conditions. For example, the clause sketched below indicates that any record with the value "q" in the fifth column position should be loaded.
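A sketch of such a clause (reconstructed from the description above):

    WHEN (5) = 'q'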

Parentheses are optional, but should be used for clarity with multiple comparisons joined by AND, as in the sketch below.
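A sketch (hypothetical field names and values):

    WHEN (deptno = '10') AND (job = 'SALES')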

The WHEN clause is then evaluated, and a row is inserted into the table only if the WHEN clause is true. To declare a file named mydata.dat, use the INFILE keyword, followed by the filename and an optional processing options string; a sketch follows below. For details on the syntax of the file processing options string, see your Oracle operating system-specific documentation. The sketch uses the recommended convention of single quotation marks for filenames and double quotation marks for everything else.
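A sketch of the declaration (as before, RECSIZE and BUFFERS stand in for operating system-specific option-string keywords):

    INFILE 'mydata.dat' "RECSIZE 80 BUFFERS 8"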

If you have specified that a bad file is to be created, the following applies: on some systems, a new version of the file is created if a file with the same name already exists. See your Oracle operating system-specific documentation to find out whether this is the case on your system. If you do not specify a name for the bad file, the name defaults to the name of the datafile with an extension or file type of .bad.

The bad file is created in the same record and file format as the datafile so that the data can be reloaded after making corrections. For datafiles in stream record format, the record terminator that is found in the datafile is also used in the bad file. You can specify a bad file by filename alone, taking the default file extension, or by filename with an explicit extension; sketches of both forms follow below.
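Sketches of the two forms (filenames are illustrative; the first takes the default .bad extension, the second gives an explicit one):

    BADFILE foo
    BADFILE 'bad.rej'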

If there is an error loading a LOB, the row is not rejected; rather, the LOB column is left empty (not null) with a length of zero (0) bytes. If the data can be evaluated according to the WHEN clause criteria (even with unbalanced delimiters), then it is either inserted or rejected. Neither a conventional path nor a direct path load will write a row to any table if it is rejected because of reason number 2 in the previous list.

Additionally, a conventional path load will not write a row to any tables if reason number 1 or 3 in the previous list is violated for any one table. The row is rejected for that table and written to the reject file. The log file indicates the Oracle error for each rejected record.

The records contained in this file are called discarded records. Discarded records do not satisfy any of the WHEN clauses specified in the control file. These records differ from rejected records. Discarded records do not necessarily have any bad data. No insert is attempted on a discarded record. You can specify the discard file directly by specifying its name, or indirectly by specifying the maximum number of discards. The discard file is created in the same record and file format as the datafile.

For datafiles in stream record format, the same record terminator that is found in the datafile is also used in the discard file. The default filename is the name of the datafile, and the default file extension or file type is .dsc. A discard filename specified on the command line overrides one specified in the control file.

If a discard file with that name already exists, it is either overwritten or a new version is created, depending on your operating system. A filename specified on the command line overrides any discard file that you may have specified in the control file. There are several ways to specify a name for the discard file from within the control file. For a table with no WHEN clauses, an attempt is made to insert every record into that table; therefore, records may be rejected, but none are discarded.

You can limit the number of records to be discarded for each datafile by specifying an integer with the DISCARDMAX keyword, as sketched below. When the discard limit is reached, processing of the datafile terminates and continues with the next datafile, if one exists. You can specify a different number of discards for each datafile. Or, if you specify the number of discards only once, then the maximum number of discards specified applies to all files.
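A sketch (illustrative filename and limit):

    INFILE 'mydat1.dat' DISCARDMAX 5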

See the Oracle9i Database Globalization Support Guide for information about supported character sets. The fastest way to load shift-sensitive character data is to use fixed-position fields without delimiters; to improve performance, keep the earlier points in mind. The following sections provide a brief introduction to some of the supported character encoding schemes. Multibyte character sets support Asian languages. Data can be loaded in multibyte format, and database object names (fields, tables, and so on) can be specified with multibyte characters.

In the control file, comments and object names can also use multibyte characters. Unicode is a universal encoded character set that supports storage of information from most languages in a single character set. Unicode provides a unique code value for every character, regardless of the platform, program, or language.

A character in UTF-8 can be 1, 2, or 3 bytes long. Multibyte fixed-width character sets (for example, AL16UTF16) are not supported as the database character set. This alternative character set is called the database national character set.

Only Unicode character sets are supported as the database national character set. However, the Oracle database server supports UTF-16 encoding only with big-endian byte ordering (AL16UTF16), and only as a database national character set, not as a database character set. When data character set conversion is required, the target character set should be a superset of the source datafile character set.

Otherwise, characters that have no equivalent in the target character set are converted to replacement characters, often a default character such as a question mark (?). This causes loss of data. If field lengths are specified in bytes and data character set conversion is required, the converted values may take more bytes than the source values if the target character set uses more bytes than the source character set for any character that is converted.

An error is reported if the larger converted value exceeds the size of the database column. You can avoid this problem by specifying the database column size in characters and by also using character sizes in the control file to describe the data. Another way to avoid this problem is to ensure that the maximum column size is large enough, in bytes, to hold the converted value.

Normally, the specified name must be the name of an Oracle-supported character set. However, because you are allowed to set up data using the byte order of the system where you create the datafile, the data in the datafile can be either big-endian or little-endian. Therefore, a different character set name, UTF16, is used. It is possible to specify different character sets for different input datafiles. If the control file character set is different from the datafile character set, keep the following issue in mind.

To ensure that the specifications are correct, you may prefer to specify hexadecimal strings rather than character string values. If hexadecimal strings are used with a datafile in a UTF-16 Unicode encoding, the byte order is different on a big-endian versus a little-endian system. For example, a comma (",") in UTF-16 on a big-endian system is X'002c'; on a little-endian system it is X'2c00'. SQL*Loader requires hexadecimal strings to be specified in big-endian format and swaps the bytes as necessary, which allows the same syntax to be used in the control file on both a big-endian and a little-endian system.

For example, the specification CHAR(10) in the control file can mean 10 bytes or 10 characters. These are equivalent if the datafile uses a single-byte character set, but they are often different if the datafile uses a multibyte character set. To avoid insertion errors caused by expansion of character strings during character set conversion, use character-length semantics in both the datafile and the target database columns.

Byte-length semantics are the default for all datafiles except those that use the UTF16 character set, which uses character-length semantics by default. It is possible to specify different length semantics for different input datafiles. Some datatypes use byte-length semantics even if character-length semantics are being used for the datafile, because the data is binary, or is in a special binary-encoded form (in the case of ZONED and DECIMAL).

This is necessary to handle datafiles that have a mix of data of different datatypes, some of which use character-length semantics, and some of which use byte-length semantics.

The SMALLINT length field takes up a certain number of bytes depending on the system (usually 2 bytes), but its value indicates the length of the character string in characters. Character-length semantics in the datafile can be used independently of whether character-length semantics are used for the database columns. Therefore, the datafile and the database columns can use either the same or different length semantics.


