26

Re: When linear search is faster than a hash map

Hello, netch80, you wrote:
N> Hello, c-smile, you wrote:
CS>> The article presents a case where an O(N) lookup beats O(1) hands down.
N> By a factor of one and a half (3.44 versus 2.13) - and that is "hands down"?
N> "Well, excuse me..." (c) Vovochka
N> And note, at only 9 records the linear search is already worse. And what comes after that?
CS>> And against O(log N) it is an outright rout.
N> By a couple of dozen percent, so far. Well, a kopeck here, a kopeck there...
N> And if you really need fast search, here is how it should be done - look, for example, at Kamailio:
N>
N> #define _acce_ 0x65636361 /* "acce" */
N> #define _allo_ 0x6f6c6c61 /* "allo" */
N> #define _auth_ 0x68747561 /* "auth" */
N> #define _oriz_ 0x7a69726f /* "oriz" */
N> #define _atio_ 0x6f697461 /* "atio" */
N> #define _call_ 0x6c6c6163 /* "call" */
N> #define __id2_ 0x2064692d /* "-id " */
N> #define __id1_ 0x3a64692d /* "-id:" */
N> [...]
N> #define FIRST_QUATERNIONS \
N>     case _via1_: via1_CASE; \
N>     case _from_: from_CASE; \
N>     case _to12_: to12_CASE; \
N>     case _cseq_: cseq_CASE; \
N>     case _call_: call_CASE; \
N>     case _cont_: cont_CASE; \
N>     case _rout_: rout_CASE; \
N>     case _max__: max_CASE; \
N>
N> ...and so on: the string is cut into standard 4-byte chunks, converted to lower case (SIP headers are case-insensitive), and then it is the compiler's job to pick the right branch when laying out the switch (normally it builds something like a tree). Yes, it is a very hard, targeted optimization. But there it is justified.

Well, even here it is not a sure thing, actually: 1) endianness, 2) the conversion to lower case is O(n), and 3) a switch over sparse case values is still a linear lookup. A bit better, but you still have to look at the actual probability distribution of the tokens, about which the compiler knows nothing, so it may generate a poor lookup.
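To make the technique concrete, here is a minimal sketch of the 4-byte-chunk idea from the Kamailio excerpt above (my illustration, not the actual Kamailio code). It assumes a little-endian machine, ASCII input and at least 4 readable bytes at the pointer - exactly the endianness and lower-casing baggage mentioned in points 1 and 2:

#include <cstdint>
#include <cstring>

enum Header { H_UNKNOWN, H_FROM, H_CSEQ, H_CALL };

// Pack the first 4 bytes of the token into one word and crudely lower-case
// ASCII letters in all 4 positions at once.
static uint32_t quad(const char* p) {
    uint32_t v;
    std::memcpy(&v, p, 4);
    return v | 0x20202020u;
}

Header classify(const char* s) {
    switch (quad(s)) {               // the compiler lays this out as a tree or jump table
    case 0x6d6f7266u: return H_FROM; // "from" on a little-endian machine
    case 0x71657363u: return H_CSEQ; // "cseq"
    case 0x6c6c6163u: return H_CALL; // "call" (the same constant as _call_ above)
    default:          return H_UNKNOWN;
    }
}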

27

Re: When linear search is faster than a hash map

Hello, rg45, you wrote:
R> Hello, c-smile, you wrote:
CS>> The article presents a case where an O(N) lookup beats O(1) hands down. And against O(log N) it is an outright rout.
R> But nobody ever promised that O(1) will ALWAYS be faster than O(log N), or O(log N) always faster than O(N). These estimates only describe the asymptotic dependence of running time on the size of the input sequence, not the exact dependence. All they let you say is that there exists an input size beyond which O(1) starts to win over O(log N), and O(log N) over O(N). And what that size actually is depends on...

Those formulas also hide a constant which people tend to forget about, and that constant is where the caches, branch prediction and the rest live. At large n its influence is negligible, which is why it gets dropped from the formulas, but at small n it can have the decisive impact on speed.
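A back-of-the-envelope version of that hidden constant (the numbers are purely illustrative, not measurements): if a linear scan costs roughly $T_{\text{lin}}(n) = a\,n$ per lookup and a hash table a flat $T_{\text{hash}}(n) = b$, the linear scan wins whenever

$a\,n < b \iff n < b/a$,

so with, say, $a \approx 1$ ns per element compared and $b \approx 50$ ns for hashing plus a likely cache miss, the break-even point sits near $n \approx 50$ - precisely the "small n" regime this thread argues about.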

28

Re: When linear search is faster than a hash map

29

Re: When linear search is faster than a hash map

Hello, c-smile, you wrote:
CS> Little children, never ever go walking off to Africa - do not use std::map for purposes it was not meant for.

And if in the linear search you reduced the check to a memcmp(), it would win in all the cases. With a perfect hash, in real life you could not skip the final check that what was found is really what was searched for. On the given set it produces no collisions, of course, but who said it is guaranteed to be fed only strings from that set?
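A self-contained sketch of that last point (the three-keyword table and the toy hash are my own invention, standing in for a real perfect hash from a generator such as gperf): even when the hash is collision-free over the known keywords, an arbitrary input token can still land in an occupied slot, so the stored key has to be compared anyway.

#include <cstdio>
#include <cstring>

struct Entry { const char* name; unsigned len; int value; };

// Toy stand-in for a perfect hash over {"date", "time", "enum"}: their first
// letters happen to map to distinct slots modulo 7.
static const Entry table[7] = {
    { "", 0, -1 }, { "", 0, -1 },
    { "date", 4, 4 },            // slot 'd' % 7 == 2
    { "enum", 4, 8 },            // slot 'e' % 7 == 3
    { "time", 4, 6 },            // slot 't' % 7 == 4
    { "", 0, -1 }, { "", 0, -1 },
};

static unsigned toy_hash(const char* s, unsigned len) {
    return len ? (unsigned char)s[0] % 7u : 0;
}

int lookup(const char* s, unsigned len) {
    const Entry& e = table[toy_hash(s, len)];
    // The verification step the message is talking about: without this memcmp,
    // an unknown token such as "echo" would be reported as "enum".
    if (len == e.len && e.len != 0 && std::memcmp(s, e.name, len) == 0)
        return e.value;
    return -1;                   // not one of the known keywords
}

int main() {
    std::printf("%d %d %d\n", lookup("date", 4), lookup("enum", 4), lookup("echo", 4));
    return 0;                    // prints: 4 8 -1
}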

30

Re: When linear search is faster than a hash map

Hello, c-smile, you wrote:
CS> This discussion here (Author: c-smile, Date: 05.10 07:47) prompted me to write an article on the subject.
CS> The article presents a case where an O(N) lookup beats O(1) hands down. And against O(log N) it is an outright rout.
CS> Little children, never ever go walking off to Africa - do not use std::map for purposes it was not meant for.

Could you check a variant like this? The idea is to put all the data into a single array, laid out sequentially so that the cache works well, and to compare chunks of memory with memcmp:

#include <memory.h>
#include <string.h>
#include <string>
#include <iostream>

// enum FormatterType {
//     simple_formatter,     // 0
//     integer_formatter,    // 1
//     decimal_formatter,    // 2
//     currency_formatter,   // 3
//     date_formatter,       // 4
//     date_local_formatter, // 5
//     time_formatter,       // 6
//     time_local_formatter, // 7
//     enum_formatter,       // 8
//     duration_formatter    // 9
// };

int str2enum(std::string const& s)
{
    static int const ssize = strlen("date-local");  // width of the widest keyword (10)
    static char m[] = "          "                  // scratch slot of ssize spaces
                      "text      "
                      "integer   "
                      "decimal   "
                      "currency  "
                      "date      "
                      "date-local"
                      "time      "
                      "time-local"
                      "enum      "
                      "duration  ";
    static int const msize = strlen(m) / ssize;
    if (s.size() > (size_t)ssize) return 0;         // 0 - simple_formatter
    memset(m, ' ', ssize);                          // pad the scratch slot
    memcpy(m, s.c_str(), s.size());
    for (int i = 1; i < msize; ++i)
        if (memcmp(m, m + i * ssize, ssize) == 0) return i - 1;
    return 0;                                       // 0 - simple_formatter
}

int main(int, char**)
{
    std::cout << str2enum("text") << std::endl;
    std::cout << str2enum("date-local") << std::endl;
    std::cout << str2enum("duration") << std::endl;
    std::cout << str2enum("abc") << std::endl;
    std::cout << str2enum("abcdefjhiklmnopst") << std::endl;
    return 0;
}

31

Re: When linear search is faster than a hash map

Hello, Pzz, you wrote:
Pzz> Hello, c-smile, you wrote:
CS>> Little children, never ever go walking off to Africa - do not use std::map for purposes it was not meant for.
Pzz> And if in the linear search you reduced the check to a memcmp(), it would win in all the cases.

And how does that help?

Pzz> With a perfect hash, in real life you could not skip the final check that what was found is really what was searched for. On the given set it produces no collisions, of course, but who said it is guaranteed to be fed only strings from that set?

Well, that check is there:

// gperf approach +++
FormatterType gperf_t2e(std::string name)
{
    const struct enum_def* ed = enum_names::find_def(name.c_str(), (unsigned int)name.size());
    return ed ? FormatterType(ed->value) : simple_formatter;
}
// gperf approach ---

gperf returns nullptr when the key is not found, that is, when the input stream contains tokens outside the set.
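For context, a gperf setup that could produce a find_def like the one above might look roughly like this (my reconstruction under assumptions, not the actual input file used for the article; formatter.h is assumed to declare FormatterType and its enumerators):

/* keywords.gperf */
%{
#include "formatter.h"
%}
struct enum_def { const char* name; int value; };
%%
text,       simple_formatter
integer,    integer_formatter
decimal,    decimal_formatter
currency,   currency_formatter
date,       date_formatter
date-local, date_local_formatter
time,       time_formatter
time-local, time_local_formatter
enum,       enum_formatter
duration,   duration_formatter
%%

generated with something along the lines of gperf -L C++ -t -Z enum_names -N find_def keywords.gperf > keywords.hpp; the generated find_def hashes the input and then performs the final string comparison itself, returning a null pointer on mismatch.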

32

Re: When linear search is faster than a hash map

Hello, c-smile, you wrote:
Pzz>> And if in the linear search you reduced the check to a memcmp(), it would win in all the cases.
CS> And how does that help?

It works faster than comparing the strings byte by byte in a loop.

33

Re: When linear search is faster than a hash map

34

Re: When linear search is faster than a hash map

Hello, c-smile, you wrote:
CS> This discussion here (Author: c-smile, Date: 05.10 07:47) prompted me to write an article on the subject.
CS> The article presents a case where an O(N) lookup beats O(1) hands down. And against O(log N) it is an outright rout.
CS> Little children, never ever go walking off to Africa - do not use std::map for purposes it was not meant for.

Hm, I think I already explained all of this to you clearly in this message http://rsdn.org/forum/cpp/6934121.1 (Author: alex_public, Date: 15.10 04:58) in that thread. It is not clear why you started a new one. Unless you just wanted to talk about it, but had nothing to answer to that message?)))

35

Re: When linear search is faster than a hash map

Hello, Pzz, you wrote:
Pzz> Hello, c-smile, you wrote:
Pzz>>> And if in the linear search you reduced the check to a memcmp(), it would win in all the cases.
CS>> And how does that help?
Pzz> It works faster than comparing the strings byte by byte in a loop.

No, not faster. This one takes 2.82 seconds:

bool operator==(const slice& r) const
{
    if (length != r.length) return false;
    for (unsigned int i = 0; i < length; ++i)
        if (start[i] != r.start[i]) return false;
    return true;
}

And this one takes 3.06 seconds:

bool operator==(const slice& r) const
{
    if (length != r.length) return false;
    return memcmp(start, r.start, length) == 0;
}

That is the cost of the function call and of the setup inside memcmp. memcmp wins on long sequences, but not in this case.

36

Re: When linear search is faster than a hash map

Hello, Engler, you wrote:
E> Hello, Engler, you wrote:
E> As a result we have a well optimized function if_t2e.

What exactly is optimized there? A banal if-elif-elif-else chain, i.e. a sequential (linear) search.
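For reference, this is the shape being described (a hypothetical reconstruction - the quoted if_t2e itself is not shown in this excerpt, and the enum is trimmed): every comparison runs in turn, so the work grows linearly with the number of keywords, just like any other linear scan.

#include <string>

enum FormatterType { simple_formatter, integer_formatter, decimal_formatter, currency_formatter };

FormatterType if_t2e(const std::string& s)
{
    if      (s == "text")     return simple_formatter;   // 1st comparison
    else if (s == "integer")  return integer_formatter;  // 2nd comparison
    else if (s == "decimal")  return decimal_formatter;  // 3rd comparison
    else if (s == "currency") return currency_formatter; // 4th comparison
    // ...and so on for the remaining keywords...
    else                      return simple_formatter;   // default
}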

37

Re: When linear search is faster than a hash map

Hello, MTD, you wrote:
MTD> This is not only about small n.

Wow. I would love to see the linear search overtaking the asymptotically better algorithms at an n of at least a million. Better yet, a billion.

MTD> For estimating the speed of algorithms it is proposed to use it [the asymptotics]. That is not always true. For example, a binary tree and a sorted array have identical search asymptotics - a logarithm. Does that mean they are equally effective? No: the tree may be scattered all over memory, which causes constant cache misses, while the sorted array turns out to be several times more effective. Is that accounted for in Knuth? No, and that is exactly the point.

Cache misses and the like matter only for small n. But even without accounting for the architecture of modern processors, brute-force algorithms are usually simpler, so a single pass executes fewer instructions. Accordingly, yes, for small n the simple algorithm can be faster, and most likely will be. But as n grows it always ends badly. Accounting for processor architecture and the like will raise performance by at most a factor of 10 at large n. Fine, 100. Fine, even a thousand, though that is already improbable. A change of asymptotics, on the other hand, can give a speedup of millions or billions of times. The main thing is that there is enough memory to hold that n.
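To illustrate the binary-tree-versus-sorted-array part of the exchange, a minimal sketch with toy data (my example): both lookups below are O(log n), but the sorted std::vector lives in one contiguous block while the std::map nodes are scattered across the heap, so the vector typically costs far fewer cache misses per search.

#include <algorithm>
#include <cstdio>
#include <map>
#include <vector>

bool in_sorted_vector(const std::vector<int>& v, int key) {
    return std::binary_search(v.begin(), v.end(), key);  // contiguous storage
}

bool in_map(const std::map<int, int>& m, int key) {
    return m.find(key) != m.end();                       // pointer-chasing tree
}

int main() {
    std::vector<int> v{1, 3, 5, 7};                       // kept sorted
    std::map<int, int> m{{1, 0}, {3, 0}, {5, 0}, {7, 0}};
    std::printf("%d %d\n", in_sorted_vector(v, 5), in_map(m, 4));  // prints: 1 0
    return 0;
}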

38

Re: When linear search is faster than a hash map

Hello, elmal, you wrote:
E> Wow.

Did you even try to read what I wrote? Try, and then point a finger at the place where I called linear search a panacea.

E> Cache misses and the like matter only for small n.

You don't say. Here you have 100 terabytes of data - is that a small n? Now imagine that you have RAM (...) and a disk array. Is it still not obvious to you that the number of misses has to be minimized?

E> A change of asymptotics, on the other hand, can give a speedup of millions or billions of times.

Did I deny that anywhere? Point a finger.

39

Re: When linear search is faster than a hash map

40

Re: When linear search is faster than a hash map

Hello, MTD, you wrote:
MTD> A foil hat saves you from the voices in your head and restores mental health.

A foil hat actually reflects the inner voices back into the head. Because of that the process goes into a loop and there is no way out of it. In a word, overheating sets in.

41

Re: When linear search is faster than a hash map

42

Re: When linear search is faster than a hash map

Hello, alex_public, you wrote:
_> Hm, I think I already explained all of this to you clearly in this message http://rsdn.org/forum/cpp/6934121.1 (Author: alex_public, Date: 15.10 04:58) in that thread. It is not clear why you started a new one. Unless you just wanted to talk about it, but had nothing to answer to that message?)))

May I ask a question: std::string_view simply uses dictionary order, http://en.cppreference.com/w/cpp/string … erator_cmp. Or am I missing something? In what other way could one even compare strings, if not in dictionary order (and how could operator< be done any more simply)?

43

Re: When linear search is faster than a hash map

Hello, Engler, you wrote:
_>> Hm, I think I already explained all of this to you clearly in this message http://rsdn.org/forum/cpp/6934121.1 (Author: alex_public, Date: 15.10 04:58) in that thread. It is not clear why you started a new one. Unless you just wanted to talk about it, but had nothing to answer to that message?)))
E> May I ask a question: std::string_view simply uses dictionary order, http://en.cppreference.com/w/cpp/string … rator_cmp. Or am I missing something?

Both std::string and std::string_view do have a lexicographic comparison operator, which is exactly what I wrote about in the message linked above. But std::map is in no way obliged to use it - it is merely the default value of the third template parameter of the map, and when performance matters you naturally put your own comparator there.

E> In what other way could one even compare strings, if not in dictionary order (and how could operator< be done any more simply)?

Any way you like, as long as it works fast and defines some ordering usable for sorting. For example, under the link above there is a working example of such an operator with which std::map beats the linear search even on small lists.
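A minimal sketch of the kind of comparator meant here (my illustration, not the exact code from the linked message): it orders keys by length first and falls back to memcmp only for equal lengths. That is not dictionary order, but it is still a strict weak ordering, which is all std::map requires, and most mismatches get rejected by the single size comparison.

#include <cstring>
#include <iostream>
#include <map>
#include <string>

struct FastLess {
    bool operator()(const std::string& a, const std::string& b) const {
        if (a.size() != b.size()) return a.size() < b.size();  // cheap reject
        return std::memcmp(a.data(), b.data(), a.size()) < 0;  // equal lengths only
    }
};

int main() {
    std::map<std::string, int, FastLess> m{
        {"text", 0}, {"integer", 1}, {"date-local", 5}, {"duration", 9}};
    std::cout << m.at("date-local") << "\n";  // prints: 5
    return 0;
}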