Hi,

since some versions of Delphi there are classes which try to mimic functionality of the .NET framework (e.g. TFile, TDirectory or TPath).

I used TTextReader (or the descendant TStreamReader) for parsing [JSON] files and wrote a parser for .NET and for Delphi. Guess what? .NET outperforms Delphi when using a TStreamReader. At first I thought it was my parsing code in Delphi which caused this massive difference. Parsing a 180 MB JSON with .NET took ~8 sec but with Delphi it took ~30 sec.

I traced it down to TStreamReader. Try it yourself:

{code}
with TStopwatch.StartNew do
begin
  try
    with TFile.OpenText(insertAPathToAVeryBigTextFile) do
    begin
      try
        while Read <> -1 do ;
      finally
        Free;
      end;
    end;
  finally
    writeln(IntToStr(ElapsedMilliseconds));
    ReadLn;
  end;
end;
{code}

Use the same file with the C# code:

{code}
Stopwatch watch = Stopwatch.StartNew();
try
{
  using (var reader = File.OpenText(insertAPathToAVeryBigTextFile))
  {
    while (reader.Read() != -1)
    {
    }
  }
}
finally
{
  Console.WriteLine(watch.ElapsedMilliseconds);
  Console.ReadLine();
}
{code}

Please don't argue against the with's in the Delphi code. They are only there to condense the code.

With my 180 MB JSON the .NET code executes in ~900 ms. The Delphi sample executes in ~20000 ms.

The problem with the Delphi TStreamReader is that it uses such a dumb buffering method. It reads the data in chunks because it may have to convert the data. So it buffers the read and converted data. Now we issue a read. It takes the corresponding amount from the buffer and then deletes that amount from the buffer. How is this done? It MOVES the rest of the buffer. So if you have a buffer size of 4096 bytes and you read a UTF-16 code point, it will move 4094 bytes. That is done on each read.

I'm really appalled that embt can let such unoptimized code go public. That is code from the runtime library!

In my case I use my own implementation of TStreamReader. It's a shame.

Soeren
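To make the cost concrete, here is a language-neutral sketch in Python (purely illustrative; this is not the RTL code, and the chunk size and data are made up) contrasting the two strategies: deleting consumed bytes from the front of the buffer, which moves the remainder on every read, versus simply advancing a read index.

```python
def read_all_shifting(data, chunk=4096):
    """Consume one 2-byte code unit per read, deleting consumed bytes
    from the buffer front -- every read moves the remaining bytes."""
    buf, pos, reads, bytes_moved = bytearray(), 0, 0, 0
    while True:
        if not buf:
            refill = data[pos:pos + chunk]
            if not refill:
                break
            buf.extend(refill)
            pos += len(refill)
        del buf[:2]                 # shifts the rest of the buffer down
        bytes_moved += len(buf)     # cost of that shift
        reads += 1
    return reads, bytes_moved

def read_all_indexed(data, chunk=4096):
    """Same reads, but only an index advances; nothing is ever moved."""
    buf, buf_at, pos, reads = b"", 0, 0, 0
    while True:
        if buf_at == len(buf):
            buf = data[pos:pos + chunk]
            buf_at = 0
            pos += len(buf)
            if not buf:
                break
        buf_at += 2
        reads += 1
    return reads, 0

sample = bytes(32 * 4096)           # 128 KB of zero bytes
r1, moved = read_all_shifting(sample)
r2, _ = read_all_indexed(sample)
# Both perform the same 65536 two-byte reads, but the shifting variant
# moves well over 100 MB of data just to hand out 2 bytes at a time.
```

The asymptotics are the point: shifting makes every read proportional to the buffer size, so a whole file costs O(n * buffer) moves, while the index version does O(n) work total.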
> I'm really appalled how embt can let such unoptimized code go public.
> That is code from the runtime library!

There is a lot of inefficient code in the RTL. Have a look at StringReplace, the "updated" TStream in XE3, or TStringHelper in XE3. And there is a lot of inefficient code at a lower level as well: interfaced objects' initialization done for each instance (instead of once per class), RTTI-based initialization and finalization for records and static arrays, etc.

> Parsing a 180mb json with .net took ~8sec but with delphi
> it took ~30sec

If you've got time, I would like to know how long it takes with dwsJSON?

Eric
I assume you are using the San Fran. GIS JSON as a sample, as it's 189 MB? Anyway... I tried to run a test using kbmMW's JSON parser on a 3 year old machine that is doing nothing but annoying me due to slow speed :)

The JSON parsed in 18 secs (compiled with XE2). It uses TFileStream for data access. If I use a TkbmMWBufferedFileStream class, which bypasses TFileStream and disables the Windows cache and uses its own, then it takes 17 secs with a cache of 1 MB and up (optionally split into pages). If I use a TkbmMWBufferedFileStream class with the Windows cache enabled, then it takes slightly more than 17 secs.

The JSON file is completely parsed into directly accessible objects. I suspect one performance hindrance in my setup is the dead slow machine, with its dead slow HD, but I also wonder if there is a difference in how .NET has parsed the JSON object compared to my situation... I have a completely parsed JSON file with all objects instantiated and directly accessible.

Does .NET do some late parsing upon accessing nodes?

--
best regards
Kim Madsen
TeamC4D
www.components4developers.com

The best components for the best developers
High availability, high performance loosely coupled n-tier products

"Soeren Muehlbauer" <[email protected]> skrev i meddelelsen news:[email protected]
> [snip]
Hi,

> > Parsing a 180mb json with .net took ~8sec but with delphi
> > it took ~30sec
>
> If you've got time, I would like to know how long it takes with dwsJSON?

I needed a SAX parser, because the JSON is so big and I only needed a portion of the info in there. In the end I used a parser written in C and linked the compiled .obj file.

BTW, dwsJSON can read the file in ~4400 ms. Not bad. Really. It uses (in release mode) a peak of ~1 GB of RAM. After reading is done it uses ~850 MB of RAM.

Soeren.
"Kim Madsen" <[email protected]> wrote in message news:[email protected]
> I suspect one performance hinderance in my setup is the dead slow machine,
> with its dead slow hd, but I also wonder if there is a difference in how

Can you mitigate the hard drive by running it more than once in succession, and seeing how subsequent runs compare to the first (after boot, or after much other disk access)?
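One hedged way to run that check (an illustrative Python sketch; the file name and size are arbitrary values, not tied to any of the parsers discussed here): read the same file twice and compare the timings, since the second pass is normally served from the OS file cache.

```python
import os
import tempfile
import time

# Scratch file; name and size are arbitrary illustration values.
path = os.path.join(tempfile.gettempdir(), "cache_probe.bin")
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))   # 4 MB of random bytes

timings = []
for attempt in range(2):
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    timings.append(time.perf_counter() - start)

cold, warm = timings
os.remove(path)
# Comparing `cold` to `warm` separates raw disk speed from parser
# speed; on most systems the second run is served from the cache.
```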
Hi,

> I assume you are using the San Fran. GIS json as sample as its 189mb?

Yes ;-).

> The JSON file is completely parsed into directly accessible objects.

As I replied to Eric, I only need small portions of the JSON files, so I needed a SAX parser. The one I currently use reads and _parses_ the complete file in under 3 seconds.

> I suspect one performance hinderance in my setup is the dead slow machine,
> with its dead slow hd, but I also wonder if there is a difference in how
> .Net have parsed the JSON object compared to my situation... I have a
> completely parsed JSON file with all objects instantiated and directly
> accessible.

I think the parsing itself is not the problem.

> Do .Net do some late parsing upon accessing nodes?

I used a self-written JSON parser, so I can't comment on .NET itself.

Soeren.
Hi,

> Do .Net do some late parsing upon accessing nodes?

The JSON library which I use for .NET is Json.NET. Its SAX parser parses the file in ~8 sec. The reader which constructs objects does this in ~20 sec.

But once more: that was not my point. My point is that TStreamReader is so abnormally slow.

Soeren.
Eric Grange wrote: > the "updated" TStream in XE3 They butchered TStream in XE3??? Doesn't this have a tremendous impact on a vast amount of applications? Am using TStreams quite often.
> I was in the need for a sax parser, because the json is so big and i
> only needed a portion of the info in there.

Yes, that makes sense. While a full-blown SAX parser isn't on the dwsJSON roadmap, a subset parser is, for that kind of problem, though it's more of a vague plan at the moment.

Is that JSON, or whatever produces it, publicly available? I'm on the lookout for large/complex test/benchmark cases.

> BTW, dwsJson can read the file in ~4400ms. Not bad. Really. It uses (in
> release mode) a peak of ~1GB of ram. After reading is done it uses
> ~850MB of ram.

Thanks for the info!

I suppose there are many elements in the JSON rather than large strings; in that case TObject & String allocation overhead becomes significant.

Eric
> They butchered TStream in XE3???

See "Intriguing changes to TStream in XE3" in the rtl newsgroup.
(and weep)

Eric
Dominique Willems wrote:
> Eric Grange wrote:
> > the "updated" TStream in XE3
>
> They butchered TStream in XE3???
>
> Doesn't this have a tremendous impact on a vast amount of applications?
> Am using TStreams quite often.

They butchered a whole lot of stuff in XE3.

Dalija Prasnikar
Eric Grange wrote:
> See "Intriguing changes to TStream in XE3" in the rtl newsgroup.

One of the most depressing reads in a long time. Has this been QC'd?
Dalija Prasnikar wrote: > They butchered a whole lot of stuff in XE3. If this is in an attempt to make everything cross-platform, that's the wrong track. I'm with the (ever diminishing) crowd who believes in developing specifically for one platform, for the simple reason that differences have a reason, and I'm not interested in developing for smallest common denominators.
> One of the most depressing reads in a long time. Has this been QC'd?

Dunno, you could try to QC it if you believe in QC. But since this code was pre-existing, the change must have been made on purpose, so it will probably be closed with "As Designed" or "Won't do".

Eric
Dominique Willems wrote:
> Dalija Prasnikar wrote:
> > They butchered a whole lot of stuff in XE3.
>
> If this is in an attempt to make everything cross-platform, that's the
> wrong track. I'm with the (ever diminishing) crowd who believes in
> developing specifically for one platform, for the simple reason that
> differences have a reason, and I'm not interested in developing for
> smallest common denominators.

No, they did it because of... I don't know... I guess neither do they...

Dalija Prasnikar
Hello,

Eric Grange wrote:
> And there is a lot of inefficient code at a lower level as well:
> interfaced objects initialization done for each instance (instead of
> once per class), RTTI-based initialization and finalization for records
> and static arrays, etc.

Agreed, however the impact of these is relatively low. I recall Andreas Hausladen had a JIT-compiled record initializer working at some point but stopped his efforts because the effect was negligible.

--
Moritz
"Hey, it compiles! Ship it!"
> If this is in an attempt to make everything cross-platform, that's the
> wrong track.

The only explanation I have is that they're rushing DotNetification and immutability, though they end up overdoing these in places where they don't make sense, and the implementations are written by a mix of newbies and pros whose only requirement seems to be "get it working". They're aiming for quantity.

However a lot of that "working" code (which doesn't always work) is essentially garbage from a performance point of view, and will have to be thrown away and rewritten from scratch to be competitive.

I don't understand why they seem hell-bent on negating all the advantages of native compilation? LLVM won't help them out of that quagmire; they've already fallen behind JavaScript, and at that pace they could have trouble competing with Ruby.

Eric
Soeren,

> With my 180mb json the .net code executes in ~900ms. The delphi sample
> executes in ~20000ms.

the awesomeness of "native code" ;)
> agreed, however the impact of these is relatively low.

It depends what you do. I'm regularly seeing them crop up in the profiler charts.

They're also the reasons why some designs aren't practical in Delphi (and aren't attempted), the reason why Variant requires so much compiler magic, and why TValue is inefficient despite involving arcane hacks. If those points were taken care of, TValue would be faster and much simpler.

They're also the reason why, if you try to emulate a value-type binary buffer with a TBytes in a record, you'll get a convenient syntax with atrocious performance.

Eric
Moritz Beutel wrote:
> the effect was negligible.

... if used in the IDE's code, because the IDE doesn't use "records" that often. My "designed" sample became a lot faster, but it was designed to be slow with the RTTI approach.

--
Andreas Hausladen
Hi Marc,

>> With my 180mb json the .net code executes in ~900ms. The delphi sample
>> executes in ~20000ms.
>
> the awesomeness of "native code" ;)

;-). I have to say that MS has done a great job on the core assemblies. Most of the code is optimized to run as fast as possible. A shame that embt can't get it right.

I like the abstraction which TTextReader provides. All the encoding stuff is abstracted away. But with the current implementation of such a basic class as TStreamReader, it's nearly useless.

Soeren.
Hi,

> Is that JSON or what produces it publicly available? I'm on the lookout
> for large/complex test/benchmark cases.

Yes. You can get it from https://github.com/zeMirco/sf-city-lots-json. Simply download it as zip.

>> BTW, dwsJson can read the file in ~4400ms. Not bad. Really. It uses (in
>> release mode) a peak of ~1GB of ram. After reading is done it uses
>> ~850MB of ram.
>
> Thanks for the info!
>
> I suppose there are many elements in the JSON rather than large strings,
> in that case TObject & String allocation overhead becomes significant.

Yes, there are many floats and small strings.

Soeren.
Hi,

> If this is in an attempt to make everything cross-platform, that's the
> wrong track.

That's what I thought when looking at the TStream implementation in XE3. If a simple Pascal implementation isn't possible for all platforms, then they should write an optimized implementation for each platform. The current way of using TBytes is bad, slow and simply wrong.

> [..] and I'm not interested in developing for
> smallest common denominators.

+1.

Soeren.
On 09/04/2013 14:39, Kim Madsen wrote:
> I assume you are using the San Fran. GIS json as sample as its 189mb?
> [snip benchmark results]
> Do .Net do some late parsing upon accessing nodes?

If you don't have TkbmMWBufferedFileStream, one can always use my buffered file streams:

http://stackoverflow.com/questions/5639531/buffered-files-for-faster-disk-access/5639712#5639712

@Kim What exactly do you mean by disabling the Windows cache? Do you mean the disk cache? I would have thought that would be an exceptionally bad idea.
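The idea behind such buffered file streams can be sketched in a few lines (an illustrative Python sketch, not the linked implementation): serve small reads from a block held in memory and hit the underlying stream only when that block is exhausted.

```python
import io

class BufferedReadStream:
    """Minimal buffered-read wrapper: small reads are served from a
    block held in memory; the wrapped stream is only hit on refills."""
    def __init__(self, raw, bufsize=64 * 1024):
        self.raw = raw
        self.bufsize = bufsize
        self.buf = b""
        self.buf_at = 0       # read position inside self.buf
        self.refills = 0      # how often the underlying stream was hit

    def read(self, n):
        out = bytearray()
        while n > 0:
            if self.buf_at == len(self.buf):
                self.buf = self.raw.read(self.bufsize)
                self.buf_at = 0
                self.refills += 1
                if not self.buf:
                    break
            take = min(n, len(self.buf) - self.buf_at)
            out += self.buf[self.buf_at:self.buf_at + take]
            self.buf_at += take
            n -= take
        return bytes(out)

payload = bytes(range(256)) * 1024          # 256 KB of test data
reader = BufferedReadStream(io.BytesIO(payload), bufsize=64 * 1024)
chunks = [reader.read(100) for _ in range(20)]
# 20 small reads, but only one 64 KB refill against the raw stream.
```

Note that, unlike the shifting buffer criticized earlier in the thread, this only advances `buf_at`; nothing is ever moved.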
On 09/04/2013 16:01, Eric Grange wrote:
>> They butchered TStream in XE3???
>
> See "Intriguing changes to TStream in XE3" in the rtl newsgroup.
> (and weep)

I'm blubbing like a baby right now
Take a look at http://blog.synopse.info/post/2011/06/02/Fast-JSON-parsing

This is the kernel of a SAX-like parser and in-place decoder. It is pretty fast, and does all the conversion in place, so it avoids most memory allocation.

As you discovered, the official Delphi RTL can be pretty slow. But you can achieve amazing speed with proper Delphi coding. We benchmarked our core units to be fast also in Win64 mode, since we bypass most of the RTL calls.

I use both .NET and Delphi on a daily basis, and am still not convinced by the .NET approach. My colleagues could tell you how many "WTF"s they hear all day... Of course, I'm less fluent in .NET, but I still miss the KISS principle of Delphi.

When Delphi mimics .NET, it is IMHO a mistake... the original is better! But when you do it the "Object Pascal way", e.g. with the TClass power and its virtual constructors instead of generics abuse, or with the powerful enumerators... Delphi rocks! And the speed is awesome...
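The SAX-like idea can be illustrated with a toy sketch (Python for illustration only; this is not the linked parser, and it handles only a small JSON subset: structural characters, escape-free strings, and plain numbers). Instead of building a DOM, it emits a stream of events the caller can filter.

```python
import re

# One token per match: a structural character, an escape-free string,
# or a plain number, with leading whitespace skipped.
_TOKEN = re.compile(r'\s*([{}\[\],:]|"[^"\\]*"|-?\d+(?:\.\d+)?)')

def json_events(text):
    """Yield (kind, value) events for a small JSON subset."""
    pos = 0
    while pos < len(text):
        m = _TOKEN.match(text, pos)
        if not m:
            break               # trailing whitespace or unsupported input
        tok = m.group(1)
        pos = m.end()
        if tok in '{}[],:':
            yield ('punct', tok)
        elif tok.startswith('"'):
            yield ('string', tok[1:-1])
        else:
            yield ('number', float(tok))

# A caller that only wants the numbers never pays for a DOM:
numbers = [v for kind, v in json_events('{"lots": [1.5, 2, 3]}')
           if kind == 'number']
```

This is why the approach suits the use case earlier in the thread: when only a portion of a huge file matters, you consume events and discard the rest instead of instantiating objects for everything.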
Am 09.04.2013 14:03, schrieb Soeren Muehlbauer:
> Hi,
> [snip well worked out performance issue]
>
> The problem with the delphi TStreamReader is that it uses such a dumb
> buffering method. [...] So if you have a buffer size of 4096
> bytes and you read a UTF-16 codepoint it will move 4094 bytes. That is
> done on each read.
>
> I'm really appalled how embt can let such unoptimized code go public.
> That is code from the runtime library!
>
> In my case i use my own implementation for TStreamReader. Its a shame.

Yes, it looks like it is, so have you created a QC request about this one? Otherwise nothing might trigger a revisit of this particular source code, as it seemingly works from a purely functional standpoint.

Greetings
Markus
Am 09.04.2013 17:13, schrieb Dominique Willems:
> Eric Grange wrote:
>> See "Intriguing changes to TStream in XE3" in the rtl newsgroup.
>
> One of the most depressing reads in a long time. Has this been QC'd?

Hello,

AFAIK it has been QC'd by somebody well known in the community, and AFAIK Allen Bauer knows about the issue. So let's hope it will be fixed soon!

Greetings
Markus
> the awesomeness of "native code" ;)

Marc, you should be intelligent enough to know that it has nothing to do with native, but everything to do with the moronic way it is coded.

If the same person wrote an implementation using Oxygene, it'd suck just as bad.
Dominique Willems wrote:
> Eric Grange wrote:
> > See "Intriguing changes to TStream in XE3" in the rtl newsgroup.
>
> One of the most depressing reads in a long time. Has this been QC'd?

Look here: http://qc.embarcadero.com/wc/qcmain.aspx?d=114659

"[Regression in XE3] The new implementations of TStream.ReadBuffer and TStream.WriteBuffer are extremely inefficient"

/Leif
Konstantine Poukhov wrote:
> If the same person wrote an implementation
> using Oxygen it'd suck just as bad.

For sure. That problem happens when you have to reinvent the wheel.
Leif Uneus wrote: > "[Regression in XE3] The new implementations of TStream.ReadBuffer > and TStream.WriteBuffer are extremely inefficient" Glad I didn't touch XE3 yet. Would have spent eons programming around this.
Soeren Muehlbauer wrote:
> Hi,
>
> > > Parsing a 180mb json with .net took ~8sec but with delphi
> > > it took ~30sec
> >
> > If you've got time, I would like to know how long it takes with dwsJSON?
>
> [snip]
>
> BTW, dwsJson can read the file in ~4400ms. Not bad. Really. It uses (in
> release mode) a peak of ~1GB of ram. After reading is done it uses
> ~850MB of ram.

Delphi's not looking so good here. I just tried this with Python, one version behind (3.2), on a two-versions-behind copy of Linux: 2 GB PC, quad-core Athlon II processor, 3.2 GHz. The file took about 3.5 seconds to read into memory, and parsing took about 18.5 seconds. However, this includes needing to swap 800 megabytes of memory out to disk; it would probably have finished a lot sooner without that overhead.

{code}
import json

with open('./citylots.json') as inFile:
    session = inFile.read()

sessionjson = json.loads(session)
{code}
David Heffernan wrote:
> I'm blubbing like a baby right now

But you use Delphi? How do you have any tears left? :-) :-(
> @Kim What exactly do you mean by disabling the windows cache? Do you
> mean the disk cache? I would have thought that would be an exceptionally
> bad idea.

Yup, the Windows cache. It would be a bad idea, unless one makes something better for the specific purpose. In our case, TkbmMWBufferedFileStream actually more than doubles performance in situations where semi-random seeks are required. The reason is that the Windows cache doesn't know better than to cache far too much data that is really not needed at the time, which is why performance goes way down. On SSD disks the performance improvement can be even better with our approach.

And as can be seen, even with sequential reading, where the Windows cache is supposed to be better, TkbmMWBufferedFileStream performs slightly better.

--
best regards
Kim Madsen
TeamC4D
www.components4developers.com

The best components for the best developers
High availability, high performance loosely coupled n-tier products
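The "own cache with control over what to cache and how much" idea can be sketched as a user-level page cache (an assumed design in Python, for illustration only; this is not kbmMW's actual code): fixed-size pages with least-recently-used eviction, so the application decides what stays in memory.

```python
import io
from collections import OrderedDict

class PageCache:
    """Sketch of a user-level page cache: fixed-size pages with LRU
    eviction, so the application controls what is cached and how much."""
    def __init__(self, raw, page_size=4096, max_pages=16):
        self.raw = raw
        self.page_size = page_size
        self.max_pages = max_pages
        self.pages = OrderedDict()   # page index -> page bytes, LRU order
        self.hits = self.misses = 0

    def _page(self, idx):
        if idx in self.pages:
            self.pages.move_to_end(idx)      # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.raw.seek(idx * self.page_size)
            self.pages[idx] = self.raw.read(self.page_size)
            if len(self.pages) > self.max_pages:
                self.pages.popitem(last=False)   # evict the LRU page
        return self.pages[idx]

    def read_at(self, offset, n):
        out = bytearray()
        while n > 0:
            idx, off = divmod(offset, self.page_size)
            page = self._page(idx)
            take = min(n, len(page) - off)
            if take <= 0:
                break                        # past end of stream
            out += page[off:off + take]
            offset += take
            n -= take
        return bytes(out)

data = bytes(range(256)) * 64                # 16 KB of test data
cache = PageCache(io.BytesIO(data), page_size=4096, max_pages=4)
first = cache.read_at(5000, 100)
again = cache.read_at(5000, 100)             # served from the cached page
```

The point of the design is the `max_pages` knob: unlike a general-purpose OS cache, the application bounds exactly how much of the file stays resident and which pages are worth keeping.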
Arnaud BOUCHEZ wrote:
> As you discovered, the official Delphi rtl can be pretty slow.
> But you can achieve amazing speed with proper Delphi coding.
> We benchmarked our core units to be fast also in Win64 mode, since we by-pass most of the rtl calls.

But the problem there is that development time is then greatly extended, since you have to bypass everything and spend an untold amount of time optimizing.

> But when you try to do it the "object pascal way", e.g. with the TClass power and its virtual constructors instead of generic abuse, or with the
> powerful enumerates... Delphi rocks! And speed is awsome...

Don't you also end up with double or triple the amount of code? And how readable/maintainable is the result?
Konstantine Poukhov wrote: > Mark. You should be intelligent enough > to know that it has nothing to do with native > but everything with moronic way it is coded. > > If the same person wrote an implementation > using Oxygen it'd suck just as bad. Chill. You did spot the wink, didn't you? -- SteveT
On 09/04/2013 22:59, Kim Madsen wrote:
> Yup.. the windows cache. It would be a bad idea, unless one makes something
> better for the specific purpose.
> [snip]

What happens when you attempt to read the same file again? At that point it's not cached and you have to hit the disk again. If you let Windows cache it, the read would be satisfied from RAM.
On 09/04/2013 22:43, Dominique Willems wrote: > Leif Uneus wrote: >> "[Regression in XE3] The new implementations of TStream.ReadBuffer >> and TStream.WriteBuffer are extremely inefficient" > > Glad I didn't touch XE3 yet. Would have spent eons programming around > this. > Strangely I'd not noticed this issue at all. I guess because my performance critical code uses a buffered file stream of my own making, and I happen to call Write/Read rather than WriteBuffer/ReadBuffer because I want to do my own error handling. Lucky me.
David Heffernan wrote: > Strangely I'd not noticed this issue at all. I was more alluding to the TStream issues.
Konstantine, > Mark. You should be intelligent enough > to know that it has nothing to do with native > but everything with moronic way it is coded. > > If the same person wrote an implementation > using Oxygen it'd suck just as bad. my point exactly. "native" code can be crap. "managed" code can be fast.
On 09/04/2013 23:16, Dominique Willems wrote: > David Heffernan wrote: >> Strangely I'd not noticed this issue at all. > > I was more alluding to the TStream issues. > So was I. My codebase manages to completely bypass the issue.
> my point exactly. "native" code can be crap. "managed" code can be fast. And the conclusion is of course ...
David Heffernan wrote: > So was I. My codebase manages to completely bypass the issue. Could you rewrite the entire RTL and post it in .attachments, please?
Hi Markus,

> Yes it looks like it is, so have you created a QC request about this
> one? Otherwise nothing might trigger a revisitation of this particular
> source code as it seemingly works from a purely functional stand point.

You are absolutely right. I could enter a QC request. But why do I have to? I have paid for the product, and I have to pay for another update to get this crap fixed. Why wasn't the original implementer smart enough to design it in a proper way?

Nevertheless, I have logged it: http://qc.embarcadero.com/wc/qcmain.aspx?d=114824

Soeren.
This is insane. Until now I believed that each new version (after D2006) was getting better and better. Now I am considering staying at Delphi 2009 forever. (Which might not last for very long, because I have had enough of the crap which EMB throws in our faces with each new release...)
> What happens when you attempt to read the same file again? At that point
> it's not cached and you have to hit the disk again. If you let Windows
> cache it, the read would be satisfied from RAM.

That's where my cache comes into play. I replaced the Windows cache with my own, which performs much better because I have better control over what to cache and how much.

--
best regards
Kim Madsen
TeamC4D
www.components4developers.com

The best components for the best developers
High availability, high performance loosely coupled n-tier products
> Yes. You can get it from https://github.com/zeMirco/sf-city-lots-json. > Simply download it as zip. Thanks! > Yes, there are many floats and small strings. I had an idea about how to handle those while still having a DOM, I'll see how it works out ;) Eric
> Yes. You can get it from https://github.com/zeMirco/sf-city-lots-json. > Simply download it as zip. Hmm, github is being its usual unstable self: I'm getting "server error" when trying to download as zip, "blob too large" when trying to download as raw, and git clone aborts after several minutes with a server error as well... Ah well, I'll try again later... Eric
> You are absolutly right. I could enter a qc request. But why i have to?
> I have paid for the product. I have to pay for another update to get
> this crap fixed.

I bought XE3 without SA. We have had 2 updates, which hardly fixed any bugs in Delphi, and now I will probably have to buy XE4 to get the bug fixes. I am not very happy about this. EMB should give the owners of XE3 a special update price, because they have released XE4 so soon. I am not against progress (looking forward to Android), but EMB should look after their existing customers with bug fixes.
On 10/04/2013 00:11, Dominique Willems wrote: > David Heffernan wrote: >> So was I. My codebase manages to completely bypass the issue. > > Could you rewrite the entire RTL and post it in .attachments, please? > I'm not trying to say that this bug is not terrible. Or has significant impact. I left a comment to that effect at Pierre's QC report. It was just an aside that I had miraculously, through coincidence, managed to avoid being affected.
Konstantine,

>> my point exactly. "native" code can be crap. "managed" code can be fast.
>
> And the conclusion is of course ...

that obsession about code being compiled to "x86 instructions in my .exe file" is pointless, because that is not the distinguishing factor that makes your app fast or slow.

βmarc
On 10/04/2013 07:57, Kim Madsen wrote: >> What happens when you attempt to read the same file again? At that point >> it's not cached and you have to hit the disk again. If you let Windows >> cache it, the read would be satisfied from RAM. > > > Thats where my cache comes into play. I replaced Windows cache with my own, > that performs much better because I have better control over what to cache > and how much. > > That's a solution that can only work when all processes that are performance sensitive use your code.
That obsession about code being JITted is pointless, because that is not the distinguishing factor that makes your app fast or slow.
:)
Arnaud, > That obsession about code being jitted is pointless, because thas is > not the distinguishing factor that your app fast or slow. > :) yes, exactly.
David Heffernan wrote: > I'm not trying to say that this bug is not terrible. Or has > significant impact. I left a comment to that effect at Pierre's QC > report. It was just an aside that I had miraculously, through > coincidence, managed to avoid being affected. I was kidding, silly. :)
> {quote:title=Joseph Mitzen wrote:}{quote} > But the problem there is that development time is then greatly extended since you have to bypass everything and spend an untold amount of time optimizing. This is why there are read-to-use libraries like our little mORMot. You do not have to reinvent the wheel, since we already did it for you, and for free. And you benefit of a number of users using, therefore testing and enhancing it worldwide. OpenSource can help, when the main proprietary product has some weaknesses. > Don't you also end up with double or triple the amount of code? And how readable/maintainable is the result? Pretty readable, and easy to follow, with no more code to type than with .Net. For instance, you can write something like this: {code} aMale := TSQLBaby.CreateAndFillPrepare(Client, 'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]); try while aMale.FillOne do DoSomethingWith(aMale); finally aMale.Free; end; {code} If you take a look at the TMS Aurelius sample code, you may find some very clear sample code, also. I do not see how the Linq syntax could be more readable, since it is a breaking change in the way of thinking: you are reading C#, then you have to switch your mind to the Linq orientation in order to understand what is going on... Playing with Resharper, in Visual Studio, is pretty instructive: sometimes, two simple "for each" loops are just more easy to understand than a Linq query. BTW, you may be able to code efficiently in Visual Studio without Resharper? :) Honestly, .Net is far from a KISS design. For instance, you can do whatever you need with WCF, with pretty good performance, but all the plumbing is a nightmare to configure. We have so much troubles because of the bindings in our .exe.config ! And you need to generate the client side wrappers code... You can bypass the difficulty by using code to fix the bindings, and use templates or pre/post-compilation engines. We are far away from KISS. 
This is why I prefer a design-by-convention approach, as we tried to implement in mORMot, and as you can find in RoR and other projects (including .Net projects, of course!).

What I really do not understand, nor like, is why Embarcadero put so much effort into "mimicking" .Net. .Netism in Pascal does not make sense! For instance, attributes are defined before the method/field, whereas the FreePascal way (i.e. putting them *after* the method/field) makes better sense to me. Of course, this is a personal preference, but I can assure you that you can be as productive in Delphi as in .Net. It will depend on the programmer's skills.
> That's a solution that can only work when all processes that are
> performance sensitive use your code.

Ehh... yes... that's a given... but that's often how it is. Most of the time a single process handles a given resource. In a few cases it is not so, but I would argue that those cases would most often benefit from having a frontend process that takes care of the actions towards that resource on behalf of the other processes.

--
best regards
Kim Madsen
TeamC4D
www.components4developers.com
The best components for the best developers
High availability, high performance loosely coupled n-tier products
Arnaud BOUCHEZ wrote:
> For instance, you can do whatever you need with WCF, with pretty good performance, but all the plumbing is a nightmare to configure.

And, in this respect, there are things you cannot do at all in WCF. I was a bit optimistic. For instance, try to implement CORS behavior with a WCF self-hosted RESTful service. It is not possible, due to the black-box approach of the system: you can't add the needed entries to the HTTP headers.

Whereas, when you have the whole source code, it is pretty easy to add such a feature when needed: it's just a commit in your own library - http://synopse.info/fossil/info/9d97879a75
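To illustrate how little is actually involved once you control the HTTP layer, here is a hedged Delphi sketch of the kind of change being described. The identifiers below (TMyRestServer, AddResponseHeader) are hypothetical, not the actual mORMot API; the real change is in the commit referenced by the link. The point is only that CORS support amounts to appending a few well-known headers before the response is sent - trivial with source access, impossible through a sealed pipeline.

{code}
// Hypothetical sketch - TMyRestServer and AddResponseHeader are
// illustrative names, not the real mORMot types or methods.
procedure TMyRestServer.AddCorsHeaders(const AllowedOrigin: string);
begin
  // Which origin may read the response ('*' allows any origin)
  AddResponseHeader('Access-Control-Allow-Origin', AllowedOrigin);
  // HTTP methods permitted in the actual (post-preflight) request
  AddResponseHeader('Access-Control-Allow-Methods',
    'GET, POST, PUT, DELETE, OPTIONS');
  // Request headers the browser is allowed to send cross-origin
  AddResponseHeader('Access-Control-Allow-Headers',
    'Content-Type, Authorization');
end;
{code}

A server would typically call something like this while handling the preflight OPTIONS request as well as when emitting normal responses.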
"Arnaud BOUCHEZ" wrote in message news:[email protected] >For instance, you can write something like this: >{code} >aMale := TSQLBaby.CreateAndFillPrepare(Client, > 'Name LIKE ? AND Sex = ?',['A%',ord(sMale)]); >try > while aMale.FillOne do > DoSomethingWith(aMale); >finally > aMale.Free; >end; >{code} > >If you take a look at the TMS Aurelius sample code, you may find some very >clear sample code, also. > >I do not see how the Linq syntax could be more readable I agree with you about readability and how useful Linq is or not. But one thing we could benefit in this case is having some property references. Currently in TMS Aurelius all queries are bound to property names, like: Results := Manager.Find<TAnimal> .Where(not TLinq.Like('Name', 'A%')) .List; if we have some kind of property reference in the compiler, it would be written this way: Results := Manager.Find<TAnimal> .Where(not TLinq.Like(TAnimal.Name, 'A%')) .List; and this would be much better to maintain, you get compile-time errors, and refactoring would (could) work. The best that can be done now is using TMS Data Modeler and creating the dictionary which would allow to use this: Results := Manager.Find<TAnimal> .Where(not TLinq.Like(Dic.Animal.Name, 'A%')) .List; but this is just an alias for a string, you might get compile-time errors, but refactor won't work either.
On 10.04.2013 08:13, Soeren Muehlbauer wrote:
> Hi Markus,
>
>> Yes, it looks like it is, so have you created a QC request about this
>> one? Otherwise nothing might trigger a revisitation of this particular
>> source code, as it seemingly works from a purely functional standpoint.
>
> You are absolutely right. I could enter a QC request. But why should I have to?
> I have paid for the product, and I'd have to pay for another update to get
> this crap fixed. Why wasn't the original implementor smart enough to design
> it properly in the first place?

In some ideal world you'd be right, of course. But we've got to live with reality, I guess - otherwise we'd only get constantly depressed...

> Nevertheless, I have logged it:
> http://qc.embarcadero.com/wc/qcmain.aspx?d=114824

Thanks. Added a comment.

Markus