ASE15: Hints to improve a batch doing hundreds of thousands of selects and thousands of updates/deletes

Adaptive Server Enterprise/15.0.3/EBF 16548
ESD#1/P/Sun_svr4/OS 5.8/ase1503/2680/64-bit/FBO/Thu Mar  5
09:39:28 2009

Good evening everybody,

I posted a question about a month ago regarding batch
performance (ASE 15 on Solaris 10).
That batch was doing about 100000 inserts/updates and roughly
the same number of selects. It's using a unique Session and
Commit each inserts/updates.
I was in charge of improving its execution and, after a few
runs, I improved it by enabling the Statement Cache.
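
(For reference, enabling the statement cache basically comes down to the
two settings below; the size is given in 2K pages, and the value shown is
only illustrative, roughly matching the 400 MB cache reported further down.)

sp_configure 'statement cache size', 204800
go
sp_configure 'enable literal autoparam', 1
go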

But what improved this batch didn't help the other batches, and
today I'm out of ideas.

My batch does roughly 450000 selects and 75000
inserts/updates. Most of the queries are simple selects (no
joins) and simple updates.

Regarding my dataserver configuration:
Page size is 8K
8 engines
Default data cache of 6.2 GB (8K pool = 4500 MB, wash size 61440 KB;
64K pool = 1700 MB, wash size 45760 KB)
Log cache of 200 MB (8K pool = 200 MB, wash size 8192 KB);
my_database..syslogs is bound to it.
tempdb cache of 500 MB (8K pool = 350 MB, wash size 61440 KB;
16K pool = 150 MB, wash size 30720 KB)
Optimization goal is allrows_oltp
No worker processes

I first tried to tune the dataserver by:
adding a 16K pool to my log cache and changing my log I/O size,
without success;
enabling the 'async log service' option on my database, also without
success.
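
(Roughly, the commands behind those attempts were along these lines;
'log_cache' stands for whatever my 200 MB log-bound cache is actually
named, and the pool size is only an example.)

-- add a 16K buffer pool to the log-bound cache
sp_poolconfig 'log_cache', '32M', '16K'
go
-- set the log I/O size of the database to 16K
use my_database
go
sp_logiosize '16'
go
-- enable the async log service option for the database
use master
go
sp_dboption my_database, 'async log service', true
go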

During each test I ran sp_sysmon (15 minutes). The output
seems reasonable overall, but the following lines make me think the
bottleneck is on the log I/O:

Last Log Page Writes               22.8           1.0       20489       2.9 %

  Transaction Detail            per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Inserts
      APL Heap Table             1532.0          66.7     1378757      99.5 %

  Total Rows Updated               22.3           1.0       20033       1.4 %
  Total DOL Rows Updated           22.3           1.0       20033       1.4 %

  ULC Flushes to Xact Log       per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    by Full ULC                     0.1           0.0          64       0.3 %
    by End Transaction             22.8           1.0       20500      98.1 %
  Avg # Writes per Log Page         n/a           n/a    12.78155         n/a

  Total Last Pg Locks            1532.0          66.7     1378757     100.0 %

  SQL Statement Cache:
    Statements Cached               0.0           0.0           8         n/a
    Statements Found in Cache     177.7           7.7      159942         n/a
    Statements Not Found            0.0           0.0           8         n/a
    Statements Dropped              0.0           0.0           0         n/a
    Statements Restored             0.0           0.0           1         n/a
    Statements Not Cached           0.0           0.0           0         n/a

  Device:
    /SYBASE15/LOG/IDGN_INT_LOG.dat
    IDGN_INT_LOG                per sec      per xact       count  % of total
  -------------------------  ------------  ------------  ----------  ----------
    Reads
      APF                           0.0           0.0           2       0.0 %
      Non-APF                       0.0           0.0           7       0.0 %
    Writes                         23.3           1.0       20942     100.0 %
  -------------------------  ------------  ------------  ----------  ----------
  Total I/Os                       23.3           1.0       20951      61.6 %


I didn't query any MDA tables during my tests.
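
(If it would help, on the next run I can also capture wait events; a
minimal sketch, assuming 'enable monitoring' and 'wait event timing' are
turned on, would be something like the query below, looking for
log-related descriptions near the top of the result.)

select w.WaitEventID, i.Description, w.Waits, w.WaitTime
from master..monSysWaits w, master..monWaitEventInfo i
where w.WaitEventID = i.WaitEventID
order by w.WaitTime desc
go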

Then I looked at the showplan of the top 11 queries by number
of executions (from 15000 to 140000) and didn't see any gap
in the optimizer estimates (set statistics plancost), any
missing indexes, and the number of LIOs for each of them is
less than 5.
(By the way, each showplan told me 'Executing a newly cached
statement (SSQL_ID = 474133198).' Why isn't it reusing an
existing plan, given that I ran this batch about 10 times today and my
statement cache doesn't seem to be full?)
1> select * from master..monStatementCache
2> go
 TotalSizeKB  UsedSizeKB  NumStatements  NumSearches  HitCount  NumInserts  NumRemovals  NumRecompilesSchemaChanges  NumRecompilesPlanFlushes
 -----------  ----------  -------------  -----------  --------  ----------  -----------  --------------------------  ------------------------
      409600      203816           1222      3080582   3079675         868          748                           5                         2
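
(If it's useful, I could also check whether a given SSQL_ID is still
cached; a rough sketch, with column names quoted from memory, would be:)

-- text of the statement reported by showplan
select show_cached_text(474133198)
go
-- cached statements ordered by how often they were reused
select SSQLID, UseCount, CachedDate, LastUsedDate
from master..monCachedStatement
order by UseCount desc
go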


Now, if you have any hints after reading this message (thanks
for your patience), I thank you all thousands of times in
advance.

Simon

PS: I can post the full sp_sysmon output if needed.
Simon
8/26/2010 6:16:37 PM

> It's using a unique Session and Commit each inserts/updates.

I'm not sure what you mean by 'unique Session' ... are you logging out and back in for each insert/update?

What does sp_sysmon show for 'Connections Opened'?

As for the sp_sysmon output, a couple of items caught my eye ...

>   ULC Flushes to Xact Log       per sec      per xact       count  % of total
>   -------------------------  ------------  ------------  ----------  ----------
>     by Full ULC                     0.1           0.0          64       0.3 %
>     by End Transaction             22.8           1.0       20500      98.1 %
>   Avg # Writes per Log Page         n/a           n/a    12.78155         n/a

98.1% of your ULC flushes are due to the completion of a transaction.

The 'Avg # Writes per Log Page' is 12.78155.  This means each log page was physically written (roughly) 13 times, 
i.e. about 13 separate transactions committed while their log records fit on the same log page.

These 2 items tend to imply that you're processing your inserts/updates in small batches (perhaps one at a time?).

Have you considered wrapping groups of inserts/updates in a transaction wrapper?  The idea is to stuff several record 
modifications into a single log write.  This should show up as a lower number for 'Avg # Writes per Log Page'.

The grouping of several inserts/updates into a single transaction wrapper also cuts down on the number of physical 
writes to the log device.  For a user database where 'delayed commit' is disabled, this can mean a big savings in 
overall run time for your process since you'll be waiting a lot less time for the 'slow' disk writes to complete.  Even 
if your database has 'delayed commit' enabled (eg, temporary database), the lower number of log writes means less of a 
workload (ie, fewer disk IOs) on your log device.

How many rows to stuff into a single transaction is up to you, ie, you'll want to experiment some.

Some items to take into consideration ...

- the more records you stuff into a transaction wrapper the more locks you stand to obtain, which increases your chances 
of blocking someone else trying to access those same records; you'll also want to keep an eye on the max number of locks 
your dataserver can manage [probably not an issue with just a few 100/1000 inserts/updates]

- if your queries have to perform a lot of disk IOs (eg, read data from disk into cache) then your transaction will be 
held open longer, which in turn increases your chances of blocking someone else with one of your locks

- if you hold the transaction open too long you can delay the timely truncation of your transaction log, which in turn 
can increase your chances of filling up the log [most likely not an issue if you're grouping a few 100/1000 
inserts/updates and most/all pages are in cache, ie, each transaction should only take a few seconds at most to complete]

- as you increase the volume of inserts/updates in a single transaction wrapper you increase the likelihood of 
performing ULC flushes due to 'Full ULC'; while this isn't the kiss of death, you may want to also consider 
reconfiguring your dataserver to use a larger ULC (see the example just below) in order to reduce the number of flushes due to 'Full ULC'
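
For reference, the ULC size is a server-wide sp_configure parameter specified in bytes; the value below is only an 
example, and depending on your version the change may require a restart:

sp_configure 'user log cache size', 16384
go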

The general idea for wrapping several inserts/updates in a single transaction:

===========================
declare @loop_counter int
select  @loop_counter = 0

-- start the first txn wrapper
begin tran

while <have_rows_to_process>
begin
         select @loop_counter = @loop_counter + 1

         <insert_and/or_update>

         -- adjust following modulo test based on
         -- number of records you want in a single
         -- transaction wrapper

         if @loop_counter % 100 = 0
         begin
                 -- close out the current txn wrapper
                 commit tran

                 -- then open a new txn wrapper
                 begin tran
         end
end

-- close out the last txn wrapper
commit tran
==========================

Make sure you add error checking and rollback logic as you see fit.
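
For example, a minimal (untested) way to bolt basic error handling onto the skeleton above, keeping the same placeholders:

===========================
declare @loop_counter int
select  @loop_counter = 0

begin tran

while <have_rows_to_process>
begin
         select @loop_counter = @loop_counter + 1

         <insert_and/or_update>

         -- abandon the current group on any error
         if @@error != 0
         begin
                 rollback tran
                 break
         end

         if @loop_counter % 100 = 0
         begin
                 commit tran
                 begin tran
         end
end

-- close out the last txn wrapper, if one is still open
if @@trancount > 0
         commit tran
==========================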

>   SQL Statement Cache:
>     Statements Cached               0.0           0.0           8         n/a
>     Statements Found in Cache     177.7           7.7      159942         n/a
>     Statements Not Found            0.0           0.0           8         n/a
>     Statements Dropped              0.0           0.0           0         n/a
>     Statements Restored             0.0           0.0           1         n/a
>     Statements Not Cached           0.0           0.0           0         n/a

.... snip ...

> (By the way, each showplan told me 'Executing a newly cached
> statement (SSQL_ID = 474133198).' Why isn't it reusing an
> existing plan, given that I ran this batch about 10 times today and my
> statement cache doesn't seem to be full?)

sp_sysmon is showing that the dataserver found the vast majority of your queries in statement cache, to the tune of 
177.7 per second ... that's good.

As for the message about 'Executing a newly cached statement' ... *shrug* ... how many times do you actually see this 
message?
Mark
8/26/2010 7:45:50 PM
Hello Mark,
First of all, I don't have any way to modify this batch. As the PROD 
upgrade is scheduled in two weeks, I have little opportunity to change 
the batch itself.

As far as I can see, every time we select/insert/update data we connect 
to the DB, but it's done serially; this is what I meant by a unique 
session.

Secondly, as this is a nightly batch without any users connected, 
I'm not concerned about concurrency.

Regarding your last question, maybe my test wasn't a good one. 
Basically, what I did today was run the batch several times (by the 
way, 'enable literal autoparam' is set). Then, thanks to awk/sed, I was 
able to isolate my top 11 SQL queries and get their plans with 'set 
showplan on'. As the batch was run several times, I expected to find 
the plan for each of them in the Stmt Cache, but the message makes me 
think the opposite...

Maybe I should execute each query several times, but in that case my 
Stmt Cache is not as useful as it should be.


Simon

-- 
-- Simon --


Simon
8/26/2010 8:22:56 PM

Simon wrote:
> Hello Mark,
> First of all, I don't have any way to modify this batch. As the PROD 
> upgrade is scheduled in two weeks, I have little opportunity to change 
> the batch itself.

Too bad, from the bits and pieces you've provided it sounds like a modification of the batch might give you a decent 
boost in performance.

----------

Another option would be to break your batch into several smaller batches that you could run in parallel.

However, this would probably increase contention on the log as well as introduce some contention between the various 
batches running in parallel (ie, depends on the locking scheme of the target table and the actual queries).

> As far as I can see, every time we select/insert/update data we connect 
> to the DB, but it's done serially; this is what I meant by a unique 
> session.

If you're actually connecting/disconnecting for each select/insert/update ... that's a *big* time consumer.  There is a 
sizable overhead for the dataserver to process your connect/disconnect, and this overhead is magnified by doing it 
thousands of times in quick succession.

In some instances I've seen the connect/disconnect overhead account for anywhere from 25-80% of total run time.

Ideally you want to connect once, do all of your select/insert/update activity and then disconnect.

To know if this is what you're doing, check the 'Connections Opened' entry from the sp_sysmon session.  If there's 
little/no other activity in the dataserver during your batch run then this number should be at/near 0/sec.

----------

Also, you mention select/insert/update ...

Does this mean you're updating the same record that you just inserted?  If so, obviously (?) it would make more sense 
to derive the final set of data (eg, stored in local @variables) and then perform a single insert, ie, eliminate the 
overhead of the update ... 'course, that would also require a modification of your batch.
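
Something along these lines, where the table and column names are made up purely to illustrate the idea:

===========================
declare @final_qty int, @final_status char(1)

-- derive the final values first (from the earlier selects)
select @final_qty = 42, @final_status = 'C'

-- one insert with the final values, instead of an insert
-- followed by an update of the same row
insert into order_detail (order_id, qty, status)
values (12345, @final_qty, @final_status)
==========================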

Mark
8/26/2010 9:00:56 PM
some clarification ...

"To know if this is what you're doing, check the 'Connections Opened' entry from the sp_sysmon session.  If there's 
little/no other activity in the dataserver during your batch run then this number should be at/near 0/sec."

Connections Opened
===================
- at/near 0 => you're re-using the connection, this is good

- high number (eg, 2/10/30/100 per second) => you're doing a lot of connect/disconnects, this is bad for performance


Mark
8/26/2010 9:03:51 PM
Hi Mark,

The value for Connections Opened is 0,
so I guess the batch opens a connection at the beginning and then 
submits each query as needed.

As I was able to log all the queries, I'll try to group them in order 
to decrease the COMMIT rate.

Simon

-- 
-- Simon --


Simon
8/27/2010 7:15:02 AM