As usual, the interplay of RAID strip size, RAID stripe size, filesystem block size, extent size, and tablespace page size can be a complex matter. There is a tiny tidbit on the RAID/extent relationship in the official documentation that largely goes unnoticed. I will highlight the interesting bit here:
Quote: "Do not set the DB2_USE_PAGE_CONTAINER_TAG registry variable. As described earlier, you should create a table space with an extent size that is equal to, or a multiple of, the RAID stripe size. However, when you set DB2_USE_PAGE_CONTAINER_TAG to ON, a one-page container tag is used, and the extents do not line up with the RAID stripes."
Link to the documentation:
http://pic.dhe.ibm.com/infocenter/db2luw/v9r7/index.jsp?topic=%2Fcom.ibm.db2.luw.admin.dbobj.doc%2Fdoc%2Fc0052067.html
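The rule in the quote boils down to simple arithmetic. Here is a minimal sketch in plain Python (the sizes are illustrative, not recommendations) of picking an extent size that lines up with the RAID stripe:

```python
# A sketch of the alignment rule from the quoted documentation: choose an
# extent size (in pages) equal to, or a multiple of, the RAID stripe size.
def extent_pages(stripe_size_bytes, page_size_bytes, multiple=1):
    # the stripe itself must be a whole number of pages for alignment to work
    assert stripe_size_bytes % page_size_bytes == 0, "stripe must be page-aligned"
    return (stripe_size_bytes // page_size_bytes) * multiple

# e.g. a 128 KB stripe with a 4 KB page size gives a 32-page extent
print(extent_pages(128 * 1024, 4 * 1024))  # 32
```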
Saturday, May 25, 2013
File System Caching Info
Other than using the OS mount option, file system caching can be disabled or enabled at tablespace creation time. As important as this option is, it is not shown in the standard 'list tablespaces' output.
To get this info, you will have to resort to this:
GET SNAPSHOT FOR TABLESPACES ON <db name>
A sample output is shown below. Note the line with 'File system caching'. For this example, it is 'No' for USERSPACE1:
Tablespace name = USERSPACE1
Tablespace ID = 2
Tablespace Type = Database managed space
Tablespace Content Type = All permanent data. Large table space.
Tablespace Page size (bytes) = 4096
Tablespace Extent size (pages) = 32
Automatic Prefetch size enabled = Yes
Buffer pool ID currently in use = 1
Buffer pool ID next startup = 1
Using automatic storage = Yes
Auto-resize enabled = Yes
File system caching = No
Tablespace State = 0x'00000000'
Detailed explanation:
Normal
Tablespace Prefetch size (pages) = 32
For tablespaces created in v9.7 and later, NO FILE SYSTEM CACHING is the default. Before deciding whether caching is good for your system, make sure you understand the characteristics of the database (a key deciding factor is the presence of inlined/non-inlined LOBs).
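If you need this setting for many tablespaces, the snapshot text is easy to post-process. Below is a minimal sketch in plain Python; the snapshot text is trimmed down for illustration:

```python
# Pull the 'File system caching' setting for each tablespace out of the
# text that GET SNAPSHOT FOR TABLESPACES produces (trimmed sample below).
snapshot = """\
Tablespace name = USERSPACE1
Tablespace ID = 2
File system caching = No
Tablespace name = SYSCATSPACE
File system caching = Yes
"""

def caching_settings(text):
    settings, current = {}, None
    for line in text.splitlines():
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if key == "Tablespace name":
            current = value                 # remember which tablespace we are in
        elif key == "File system caching" and current is not None:
            settings[current] = value
    return settings

print(caching_settings(snapshot))  # {'USERSPACE1': 'No', 'SYSCATSPACE': 'Yes'}
```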
Wednesday, May 22, 2013
A 'Better' Execution Plan Graph
One crucial number that is missing from the standard plan generated by 'EXPLAIN PLAN' is the actual number of rows flowing through each node of the graph. Without it, it is close to impossible to judge the efficiency of the plan. How do you decide whether the optimizer has underestimated or overestimated the row source at any point of the execution?
A pretty neat feature, which first appeared in v9.7 and is unfortunately still largely unknown in v10.1, is 'section actuals'. It can be enabled at the database level via the SECTION_ACTUALS configuration parameter, or for an individual application using WLM_SET_CONN_ENV.
You will also need to capture the stats using an activity event monitor, e.g.: ALTER WORKLOAD WL1 COLLECT ACTIVITY DATA WITH DETAILS,SECTION, and so on.
But assuming all you want is to run a simple test on a certain query, there is an easier way. Hidden somewhere in the sea of binaries is a utility called 'db2caem'. It's really easy to use.
Let's illustrate this with a simple example. Assuming you want to test the query 'select count(*) from t1', all you need to do is issue this:
db2caem -d db2test -st "select count(*) from t1"
____________________________________________________________________
_____ D B 2 C A E M _____
DB2 Capture Activity Event Monitor data tool
I B M
The DB2CAEM Tool is a utility for capturing the activity event
monitor data with details, section and values, as well as actuals.
which could be used when analyze the SQL statement performance.
____________________________________________________________________
________________________________________________________________________________
DBT7041I "db2caem": The db2caem utility is connecting to the following database: "db2test".
DBT7038I "db2caem": The db2caem utility successfully connected to the following database: "db2test".
DBT7042I "db2caem": The SQL statement "select count(*) from t1" is being issued.
DBT7043I "db2caem": The db2caem utility is disconnecting from the following database: "db2test".
DBT7039I "db2caem": The db2caem utility successfully disconnected from the following database: "db2test".
DBT7000I db2caem completed. Output path: "C:\\DB2CAEM_2013-05-22-16.58.09.762000".
That's pretty much it. The execution plan you want is in the directory shown on the last line: "C:\\DB2CAEM_2013-05-22-16.58.09.762000".
Look for the file named 'db2caem.exfmt' inside, and you will see something like this:
-------------------------------
< some stuff removed for clarity>
Original Statement:
------------------
select
count(*)
from
t1
Optimized Statement:
-------------------
SELECT
Q3.$C0
FROM
(SELECT
COUNT(*)
FROM
(SELECT
$RID$
FROM
XXXXXX.T1 AS Q1
) AS Q2
) AS Q3
Explain level: Explain from section
Access Plan:
-----------
Total Cost: 78.1959
Query Degree: 1
Rows
Rows Actual
RETURN
( 1)
Cost
I/O
|
1
1
GRPBY
( 2)
78.1957
NA
|
10001
10001
IXSCAN
( 3)
77.8301
NA
|
-1
NA
INDEX: XXXXXXX
T1X2
Q1
<more stuff removed> ...
Note that there are now *two* numbers per operator: the first is the standard estimated rows, and the one immediately below it is the *actual* rows.
Easy eh? :)
While this is all good, unfortunately the interpretation of the numbers can be very confusing for certain execution plans. I shall leave that for another post ...
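Once you have the two numbers, comparing them is mechanical. A minimal sketch in plain Python (the function name is mine; the row counts are taken from the plan above) that quantifies the misestimation per operator:

```python
# Quantify how far off the optimizer's row estimate was at each operator.
def misestimation(est, actual):
    # ratio >= 1.0; 1.0 means the estimate was spot on
    return max(est, actual) / max(min(est, actual), 1)

# (operator, estimated rows, actual rows) read from the plan above
plan_nodes = [("GRPBY", 1, 1), ("IXSCAN", 10001, 10001)]
for op, est, actual in plan_nodes:
    print(f"{op}: est={est} actual={actual} -> {misestimation(est, actual):.1f}x off")
```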
Thursday, April 11, 2013
Optimization Guidelines - The Easy way
To use this, there is one very important step:
===> set the registry variable DB2_OPTPROFILE to YES.
Interestingly, this registry variable is not visible in the 'db2set -lr' output.
Once this is done, you are good to go. All the available 'guidelines' are documented in a section of the online documentation called 'Current Optimization Schema', which is in XML format. For the example here, I'm going to use a guideline the 'easy way'.
Let's say we have a table M1 and M2, both with columns C1 and C2. We want to join them M1.C1 = M2.C1 using NL join. Let's jump straight into the usage:
SELECT <something>
from
M2 M2 JOIN M1 M1
ON ( M1.C1 = M2.C1)
/*<OPTGUIDELINES>
<NLJOIN>
<IXSCAN TABLE="M1"/>
<TBSCAN TABLE="M2"/>
</NLJOIN>
</OPTGUIDELINES>*/
Note that you start the guideline at the *end* of your statement with /*<OPTGUIDELINES>, and end it with </OPTGUIDELINES>*/. Anything in between these tags is your guidelines to the database. In this example, I instructed the database to use an NL join for the join of tables M1 and M2, with an index scan (IXSCAN) on table M1 and a table scan (TBSCAN) on table M2.
Personally, I use it to explore WHAT-IF scenarios: what happens if the execution plan changes in a certain manner? This shortcut way of specifying guidelines makes testing very straightforward.
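Since the guidelines are plain XML, one easy sanity check before embedding them is to make sure the fragment actually parses; a malformed guideline will not do what you want. A minimal sketch in Python using only the standard library:

```python
import xml.etree.ElementTree as ET

# The guideline fragment from the example above; ET.fromstring raises
# ParseError if the XML is malformed (e.g. a missing closing tag).
guideline = """<OPTGUIDELINES>
  <NLJOIN>
    <IXSCAN TABLE='M1'/>
    <TBSCAN TABLE='M2'/>
  </NLJOIN>
</OPTGUIDELINES>"""

root = ET.fromstring(guideline)
print(root.tag)  # OPTGUIDELINES

# Embed the validated fragment at the *end* of the statement:
sql = f"SELECT * FROM M2 M2 JOIN M1 M1 ON (M1.C1 = M2.C1) /*{guideline}*/"
```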
Tuesday, April 9, 2013
Datatype Space Usage: SMALLINT, BIGINT, etc
When a datatype (such as BIGINT) is chosen for a column, is space allocated based on the 'value', or is it a fixed allocation regardless of the value?
If a value of '5' is inserted into the table, does it consume the same amount of space as the value '9876543'? Or does a '5' occupy less space than a '9876543'?
Let's find out. First, we create a single-column table of type BIGINT. Then we insert roughly 150,000 rows of the value '1' into it.
This is the space used:
TAB NPAGES
--------------- --------------------
BIGINT 771
Let's repeat this test using value '99999999' instead. This is the space used:
TAB NPAGES
--------------- --------------------
BIGINT 771
That suggests there is no difference.
That also means that if you know the range of values for a (numeric) column, you should pick the smallest possible datatype that fits the range.
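The fixed-width behavior is easy to see outside the database too. A quick sketch using Python's struct module, packing 8-byte signed integers as an analogue for BIGINT's fixed 8 bytes:

```python
import struct

# BIGINT is a fixed 8-byte type: the stored width is the same regardless
# of the value's magnitude ('>q' = big-endian signed 64-bit integer).
small = struct.pack(">q", 5)
large = struct.pack(">q", 9876543)
print(len(small), len(large))  # 8 8
```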
Below is a table showing the range for each numerical type:
Limit | Value
---|---
Smallest SMALLINT value | -32 768 |
Largest SMALLINT value | +32 767 |
Smallest INTEGER value | -2 147 483 648 |
Largest INTEGER value | +2 147 483 647 |
Smallest BIGINT value | -9 223 372 036 854 775 808 |
Largest BIGINT value | +9 223 372 036 854 775 807 |
Largest decimal precision | 31 |
Maximum exponent (Emax) for REAL values | 38 |
Smallest REAL value | -3.402E+38 |
Largest REAL value | +3.402E+38 |
Minimum exponent (Emin) for REAL values | -37 |
Smallest positive REAL value | +1.175E-37 |
Largest negative REAL value | -1.175E-37 |
Maximum exponent (Emax) for DOUBLE values | 308 |
Smallest DOUBLE value | -1.79769E+308 |
Largest DOUBLE value | +1.79769E+308 |
Minimum exponent (Emin) for DOUBLE values | -307 |
Smallest positive DOUBLE value | +2.225E-307 |
Largest negative DOUBLE value | -2.225E-307 |
Maximum exponent (Emax) for DECFLOAT(16) values | 384 |
Smallest DECFLOAT(16) value | -9.999999999999999E+384 |
Largest DECFLOAT(16) value | 9.999999999999999E+384 |
Minimum exponent (Emin) for DECFLOAT(16) values | -383 |
Smallest positive DECFLOAT(16) value | 1.000000000000000E-383 |
Largest negative DECFLOAT(16) value | -1.000000000000000E-383 |
Maximum exponent (Emax) for DECFLOAT(34) values | 6144 |
Smallest DECFLOAT(34) value | -9.999999999999999999999999999999999E+6144 |
Largest DECFLOAT(34) value | 9.999999999999999999999999999999999E+6144 |
Minimum exponent (Emin) for DECFLOAT(34) values | -6143 |
Smallest positive DECFLOAT(34) value | 1.000000000000000000000000000000000E-6143 |
Largest negative DECFLOAT(34) value | -1.000000000000000000000000000000000E-6143 |
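The integer limits in the table can be turned into a tiny helper. A sketch (the function name is mine) that picks the narrowest integer type for a known value range:

```python
# Integer limits taken from the table above.
LIMITS = [("SMALLINT", -32768, 32767),
          ("INTEGER", -2147483648, 2147483647),
          ("BIGINT", -(2**63), 2**63 - 1)]

def narrowest_type(lo, hi):
    # return the first (narrowest) type whose limits contain [lo, hi]
    for name, tmin, tmax in LIMITS:
        if tmin <= lo and hi <= tmax:
            return name
    raise ValueError("range exceeds BIGINT")

print(narrowest_type(0, 40000))  # INTEGER
```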
Tuesday, March 26, 2013
DB2 Subquery Removal Test 1
Does DB2 do subquery removal? Let's do a test and find out.
We will use a simplified TPC-H query, with 100,000 fake rows in both tables:
Original Statement:
------------------
SELECT
SUM(l_extendedprice) / 7.0 AS avg_yearly
FROM
lineitem,
part
WHERE
p_partkey = l_partkey AND
p_brand = '[BRAND]' AND
p_container = '[CONTAINER]' AND
l_quantity <
(SELECT
0.2 * AVG(l_quantity)
FROM
lineitem
WHERE
l_partkey = p_partkey
)
The outcome? See the optimized statement:
Optimized Statement:
-------------------
SELECT
(Q7.$C0 / 7.0) AS "AVG_YEARLY"
FROM
(SELECT
SUM(DECFLOAT(Q6.L_EXTENDEDPRICE, 34))
FROM
(SELECT
Q5.L_EXTENDEDPRICE
FROM
(SELECT
SUM(DECFLOAT(Q2.L_QUANTITY, 34)),
COUNT(DECFLOAT(Q2.L_QUANTITY, 34))
FROM
(SELECT
Q1.L_QUANTITY
FROM
LINEITEM AS Q1
WHERE
(Q1.L_PARTKEY = Q4.P_PARTKEY)
) AS Q2
) AS Q3,
PART AS Q4,
LINEITEM AS Q5
WHERE
(Q5.L_QUANTITY < (0.2 * (Q3.$C0 / Q3.$C1))) AND
(Q4.P_CONTAINER = '[CONTAINER]') AND
(Q4.P_BRAND = '[BRAND]') AND
(Q4.P_PARTKEY = Q5.L_PARTKEY)
) AS Q6
) AS Q7
Notice that the subquery has vanished, but LINEITEM (Q1) is still being accessed as a lateral view.
Can it do better than this? Can it reduce the access to LINEITEM from 2 to 1?
More on this later ....
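To see why the rewrite matters, here is a toy illustration in plain Python (made-up rows, and ignoring the PART filters for brevity): the correlated per-row subquery is replaced by one grouped aggregation computed up front, which is essentially what the optimized statement does.

```python
from collections import defaultdict

# Made-up lineitem rows: (l_partkey, l_quantity, l_extendedprice)
lineitem = [(1, 100, 500.0), (1, 1, 7.0), (2, 50, 300.0), (2, 40, 200.0)]

# One pass builds AVG(l_quantity) per partkey up front ...
acc = defaultdict(lambda: [0.0, 0])
for pk, qty, _ in lineitem:
    acc[pk][0] += qty
    acc[pk][1] += 1
avg_qty = {pk: total / n for pk, (total, n) in acc.items()}

# ... then the outer pass probes the precomputed averages instead of
# re-running the correlated subquery for every row.
avg_yearly = sum(price for pk, qty, price in lineitem
                 if qty < 0.2 * avg_qty[pk]) / 7.0
print(avg_yearly)  # 1.0
```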
Wednesday, March 6, 2013
"<> ALL" Multi-column Problem
For a single column, everything is fine, but when you try a multi-column set comparison, 'it' gets confused.
Let's start with a single column comparison:
db2 => select * from no_null where c1 <> all ( select c1 from some_null) ;
C1 C2
------------------------ ------------------------
0 record(s) selected.
So far so good.
Now, let's try two columns, c1 and c2:
db2 => select * from no_null where (c1, c2) <>ALL ( select c1, c2 from some_null) ;
SQL0104N An unexpected token "(" was found following "". Expected tokens may
include: "". SQLSTATE=42601
Fun eh? ;)
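A common workaround for the rejected row-value syntax is a NOT EXISTS rewrite. Here is a sketch using Python's built-in sqlite3 module, just to show the shape of the query; note the caveat that NOT EXISTS and <> ALL treat NULLs differently, so always check the semantics against your data.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE no_null(c1 INT, c2 INT);
CREATE TABLE some_null(c1 INT, c2 INT);
INSERT INTO no_null VALUES (1, 1), (2, 2);
INSERT INTO some_null VALUES (1, 1), (NULL, 3);
""")

# Multi-column anti-join written as NOT EXISTS instead of (c1, c2) <> ALL.
# Caveat: a NULL in some_null makes s.c1 = n.c1 unknown, so NOT EXISTS can
# return rows that <> ALL would not; verify against your NULL semantics.
rows = cur.execute("""
    SELECT * FROM no_null n
    WHERE NOT EXISTS (SELECT 1 FROM some_null s
                      WHERE s.c1 = n.c1 AND s.c2 = n.c2)
""").fetchall()
print(rows)  # [(2, 2)]
```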