At our site, we just upgraded from 11.0.3.3 to 11.9.3 - in the process we changed machines, so:

old configuration: ASE 11.0.3.3 on a Sparc 10 running Solaris 2.5.1, 1 processor, 100 MB allocated to Sybase

new configuration: ASE 11.9.3 on a Sun E250 running Solaris 7, 2 processors, 200 MB allocated to Sybase.

I'm running into two performance-related problems:

1) High-throughput inserts/updates of tables. We have a data feed that pumps data into the database at a high rate of speed. We have more memory and faster disks on the new system, but the rate at which the inserts/updates occur is the same as on the old configuration. I've set up a 16K buffer pool and set logiosize on the two databases that have the high number of inserts/updates. I've also set row-level locking on the two databases. After these changes, I still see no improvement over the old configuration. I've checked sp_sysmon and nothing seems to be wrong, but granted, I'm not an expert at reading the output of sp_sysmon. Anything I should specifically be looking for?

2) Slow querying of tables involving SQL with variables. I have the two following queries:

query 1:

select min(trade_time), max(trade_time)
from history..underlying_ticks_fp
where symbol = "SP" and instrument = "F"
and exp_year = 100 and exp_month = 12
and price_type = "T" and price > 0
and trade_time between "7/31/2000 00:00:00" and "8/1/2000 00:00:00"

query 2:

declare @symbol symbol
declare @exp_year int
declare @exp_month int
declare @start_time datetime
declare @end_time datetime
select @symbol = "SP"
select @exp_year = 100
select @exp_month = 12
select @start_time = "7/31/2000 00:00:00"
select @end_time = "8/1/2000 00:00:00"
select min(trade_time), max(trade_time)
from history..underlying_ticks_fp
where symbol = @symbol and instrument = "F"
and price_type = "T" and exp_month = @exp_month
and exp_year = @exp_year and trade_time
between @start_time and @end_time

Now, in the new configuration, using set statistics time on, query 1 takes 30 ms and query 2 takes 33420 ms. In the old configuration, both query 1 and query 2 took 6 ms. The table schemas in the new and old configurations are the same, as is the data in the tables. I've run update all statistics on all affected tables in the new configuration. I realize that the optimizer doesn't know what the variables are at compile time in query 2; however, the old configuration of ASE didn't seem to have the same sort of performance problem.

Any ideas on what's going on for both of these problems would be greatly appreciated.

--
Han Tunca
Tradelink L.L.C.
han@trdlnk.com
Han Tunca wrote:
> 1) High-throughput inserts/updates of tables. We have a data feed that
> pumps data into the database at a high rate of speed. We have more memory
> and faster disks on the new system, but the rate at which the
> inserts/updates occur is the same as on the old configuration.

If your transactions are very small, your insert/update speed will be limited regardless of how large the buffers into the transaction log are. Is it possible for you to batch multiple inserts and updates into larger transactions? When faced with a similar limitation, I found that a solid state disk for the transaction log was the best solution.
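As a rough sketch of what batching looks like from the feed side - the table and column names here are made up, and the batch size is just a starting point to experiment with:

-- Hypothetical sketch: commit a batch of feed rows in one transaction instead
-- of letting every single insert flush the log on its own.
begin transaction
    insert into feed_ticks (symbol, trade_time, price)
        values ("SP", "8/1/2000 09:30:00", 1475.25)
    insert into feed_ticks (symbol, trade_time, price)
        values ("SP", "8/1/2000 09:30:01", 1475.50)
    -- ... repeat for, say, 100-500 rows per batch ...
commit transaction

Fewer commits means fewer synchronous log writes per row, which is usually where a small-transaction feed spends its time.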
Han Tunca wrote:
> old configuration: ASE 11.0.3.3 on a Sparc 10 running Solaris 2.5.1,
> 1 processor, 100 MB allocated to Sybase
>
> new configuration: ASE 11.9.3 on a Sun E250 running Solaris 7,
> 2 processors, 200 MB allocated to Sybase.

Are you sure it's 11.9.3? It seems a waste to run a 64-bit app with that little memory.

> I'm running into two performance-related problems:

What was performance like on the Sparc 10? Hardware-wise it's dog slow compared to the E250.

> 1) High-throughput inserts/updates of tables. We have a data feed that
> pumps data into the database at a high rate of speed. We have more memory
> and faster disks on the new system,

And a faster network - full-duplex hme compared to half-duplex le.

> but the rate at which the inserts/updates occur is the same as on the old
> configuration. I've set up a 16K buffer pool and set logiosize on the two
> databases that have the high number of inserts/updates. I've also set
> row-level locking on the two databases. After these changes, I still see
> no improvement over the old configuration. I've checked sp_sysmon and
> nothing seems to be wrong, but granted, I'm not an expert at reading the
> output of sp_sysmon.

I get the impression of some head scratching going on. The options you changed had no effect, so they didn't need to be changed in the first place. Remember to set them back later.

> Anything I should specifically be looking for?

Check your version first. Make sure you're on the latest EBF. Compare showplans between the two servers.

> query 1:
>
> select min(trade_time), max(trade_time)
> from history..underlying_ticks_fp
> where symbol = "SP" and instrument = "F"
> and exp_year = 100 and exp_month = 12
> and price_type = "T" and price > 0
> and trade_time between "7/31/2000 00:00:00" and "8/1/2000 00:00:00"

Er ... this is a select statement and not a DML statement. Look at your schema design. What indexes, and how many, do you have on this table?

> query 2:
>
> declare @symbol symbol
> declare @exp_year int
> declare @exp_month int
> declare @start_time datetime
> declare @end_time datetime
> select @symbol = "SP"
> select @exp_year = 100
> select @exp_month = 12
> select @start_time = "7/31/2000 00:00:00"
> select @end_time = "8/1/2000 00:00:00"
> select min(trade_time), max(trade_time)
> from history..underlying_ticks_fp
> where symbol = @symbol and instrument = "F"
> and price_type = "T" and exp_month = @exp_month
> and exp_year = @exp_year and trade_time
> between @start_time and @end_time

Is this done within a stored procedure?

> Now, in the new configuration, using set statistics time on, query 1
> takes 30 ms and query 2 takes 33420 ms. In the old configuration, both
> query 1 and query 2 took 6 ms. The table schemas in the new and old
> configurations are the same, as is the data in the tables.

Try writing a stored procedure and pass the variables in as parameters.

> I've run update all statistics on all affected tables in the new
> configuration.

Eric Miner will probably jump in here once he's back from TechWave. Expect to read some comments from him about running "update all stats".

-am
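For reference, a quick way to capture that on each server so the outputs can be compared side by side (using query 1 from the original post as the test statement):

-- Record the exact server version / EBF level
select @@version
go

-- Turn on plan and timing output, then run the statement in a separate batch
set showplan on
set statistics io on
set statistics time on
go
select min(trade_time), max(trade_time)
from history..underlying_ticks_fp
where symbol = "SP" and instrument = "F"
  and exp_year = 100 and exp_month = 12
  and price_type = "T" and price > 0
  and trade_time between "7/31/2000 00:00:00" and "8/1/2000 00:00:00"
go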
Hello Han,

My first suspicion here is mismatched datatypes between the columns and the variables' declared types. Check this and make sure the declared types are the same as the columns'. There was a change in 11.5 in how type hierarchies are processed before going to the optimizer. Let us know the result of this.

There is no change in how values that are unknown at optimize time are handled in the optimizer. The only thing that changed in this area is that you can now change the value of the old 'magic numbers' from their defaults.

> I've run update all statistics on all affected tables

Anthony knows me too well ;-) There is RARELY any need to run update all statistics. In fact, I suggest you don't even consider it until you've done full testing. The time it takes to run and the maintenance it adds can be overwhelming.

Hope this helps,

Eric Miner
ASE Engineering
Optimizer Group
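One quick way to check is to compare the column definitions against the declares - a sketch, assuming the table lives in the history database as shown in the queries:

use history
go
-- Shows the exact datatype (including any user-defined type) of every column
sp_help underlying_ticks_fp
go

-- Then make the declares match exactly; for example, IF sp_help shows
-- exp_year and exp_month as smallint (an assumption - use whatever it reports):
declare @exp_year  smallint
declare @exp_month smallint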
> declare @exp_year int
> declare @exp_month int

Eric is right - check the exp_year and exp_month datatypes. I had a similar case last week; I bet exp_year and exp_month are smallint or tinyint. But I wonder why the optimizer gets so dumb in this case....
Han,

Let's look at your queries.

> query 1:
>
> select min(trade_time), max(trade_time)
> from history..underlying_ticks_fp
> where symbol = "SP" and instrument = "F"
> and exp_year = 100 and exp_month = 12
> and price_type = "T" and price > 0
> and trade_time between "7/31/2000 00:00:00" and "8/1/2000 00:00:00"

Your first query is committing a cardinal sin. NEVER perform a min and a max in the same SQL statement, because it forces a scan of the index leaf pages. If you separate the min and the max into two queries, each can be satisfied by a point query and you will get far better performance. This is known as min/max optimisation.

> query 2:
>
> declare @symbol symbol
> declare @exp_year int
> declare @exp_month int
> declare @start_time datetime
> declare @end_time datetime
> select @symbol = "SP"
> select @exp_year = 100
> select @exp_month = 12
> select @start_time = "7/31/2000 00:00:00"
> select @end_time = "8/1/2000 00:00:00"
> select min(trade_time), max(trade_time)
> from history..underlying_ticks_fp
> where symbol = @symbol and instrument = "F"
> and price_type = "T" and exp_month = @exp_month
> and exp_year = @exp_year and trade_time
> between @start_time and @end_time

Is this in a stored procedure? If so, the optimiser cannot resolve @symbol, @exp_month, @exp_year, @start_time and @end_time until the proc is run, as the values are unknown at compile time. It will not use the statistics and will therefore fall back on defaults. If you declare the variables in another proc and pass the values in as parameters, the called proc can be optimised with known values. Note that if the values vary enough that different query plans could result, look at creating the procs with the 'with recompile' option.

Andy

--
Andy Price
Sybase Certified Professional - Adaptive Server DBA v12.0
Sybase Certified Associate - Adaptive Server DBA v12.0 & v11.5
Sybase Certified Associate - SQL Developer v11.5
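Something along these lines combines both suggestions - the proc name is invented, and whether 'with recompile' is worth the extra compile cost depends on how much the parameter values really vary:

-- Parameters (unlike local variables) have known values when the plan is built,
-- and splitting min and max lets each be answered by a point query on the index.
create procedure get_trade_time_range        -- hypothetical name
    @symbol     symbol,                      -- use the same types as the columns
    @exp_year   int,
    @exp_month  int,
    @start_time datetime,
    @end_time   datetime
with recompile
as
begin
    select min(trade_time)
    from history..underlying_ticks_fp
    where symbol = @symbol and instrument = "F"
      and exp_year = @exp_year and exp_month = @exp_month
      and price_type = "T"
      and trade_time between @start_time and @end_time

    select max(trade_time)
    from history..underlying_ticks_fp
    where symbol = @symbol and instrument = "F"
      and exp_year = @exp_year and exp_month = @exp_month
      and price_type = "T"
      and trade_time between @start_time and @end_time
end
go

exec get_trade_time_range "SP", 100, 12, "7/31/2000 00:00:00", "8/1/2000 00:00:00"
go

With parameters the optimiser can use the distribution statistics on the indexed columns rather than the built-in defaults.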
Andy Price wrote:
> Your first query is committing a cardinal sin. NEVER perform a min and a
> max in the same SQL statement, because it forces a scan of the index leaf
> pages. If you separate the min and the max into two queries, each can be
> satisfied by a point query and you will get far better performance. This
> is known as min/max optimisation.

This wasn't my concern - my concern was more with query 2 being infinitely slower on an 11.9.3 database compared to the 11.0.3.3 database. After taking Eric's advice, I double-checked the variable types and indeed found they did not match up with the column types - I changed them and now get the performance I saw in 11.0.3.3. As another user asked, why the change, and is it documented? This almost seems like a bug.

The other problem I'm still having is slower inserts/updates on tables - I have two small tables that receive a stream of data throughout the day. On the new server, which has a faster network connection, faster disks, more memory and more processors, I get lower insert/update throughput than I do on the old system running 11.0.3.3. Anybody have any pointers? I've looked at sp_sysmon, changed some configuration parameters, etc., and it is still nowhere near the speed of the old system.

--
Han Tunca
Tradelink L.L.C.
han@trdlnk.com
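On the insert/update side, a focused sp_sysmon sample taken while the feed is running is probably the next step; the sections below are the ones that usually show where a small-transaction feed is waiting (the one-minute interval is just an example):

-- Sample server activity for one minute during the feed
sp_sysmon "00:01:00"
go
-- In the report, look first at:
--   Transaction Management  - ULC flushes and transaction log writes per second
--   Task Management         - context switches due to "group commit sleeps" or
--                             "last log page writes" (per-commit log flush waits)
--   Disk I/O Management     - activity and waits on the log device
--   Lock Management         - contention on the two feed tables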
Taking Eric's point about matching datatypes a bit further: the issue with the variable types comes down to the datatype hierarchy, which determines how indexes can be used with SARGs or joins. For SARGs, the types don't necessarily have to match, but if they don't, the column's datatype needs to come before the variable's in the hierarchy. There are some exceptions, such as char and varchar being treated the same, but you can read up on this in the P&T guide if you want more information.

If you are interested, you can generate the datatype hierarchy from the systypes table by doing something like this:

select name, length, hierarchy
from systypes
order by 3

I know this is basic stuff and many of you will already know it, but it may help as a refresher.

Andy
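To see where a particular table's columns sit in that hierarchy (and therefore which variable types can be compared against them without losing the index), here is a rough sketch, run in the history database:

-- List each column of the table with its datatype name and hierarchy value
select c.name colname, t.name typename, t.hierarchy
from syscolumns c, systypes t
where c.id = object_id("underlying_ticks_fp")
  and c.usertype = t.usertype
order by t.hierarchy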
Han Tunca wrote:
> The other problem I'm still having is slower inserts/updates on tables -
> I have two small tables that receive a stream of data throughout the day.
> On the new server, which has a faster network connection, faster disks,
> more memory and more processors, I get lower insert/update throughput
> than I do on the old system running 11.0.3.3. Anybody have any pointers?
> I've looked at sp_sysmon, changed some configuration parameters, etc.,
> and it is still nowhere near the speed of the old system.

I'd suggest looking at the showplan on both servers. Check that both are running the same way; if not, find out why. I suspect it could be an index issue. Also check the datatypes, just to be sure.

-am
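For example, something like this on both servers, diffing the output (feed_ticks is a placeholder for whichever tables take the feed):

-- Same indexes, same keys, same locking scheme on both servers?
sp_helpindex feed_ticks
go
-- And while the feed is running, check for blocking
sp_lock
go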