ObiwanJebroni

07-22-2003, 10:08 PM

Okay, I'm back again with this sorting business. I was wondering if anyone knew what the fastest sorting algorithm is. I'm currently using a quicksort, which has O(n log n) efficiency. I was wondering if there are any faster. Thanks.

Jason

07-23-2003, 01:26 AM

Quicksort is the fastest of the more conventional ways to sort. It is possible to write a sorting algorithm that sorts faster, but like I said, it is the fastest of the ways that everyone uses...

Jason

jkd

Originally posted by Jason

Quicksort is the fastest of the more conventional ways to sort. It is possible to write a sorting algorithm that sorts faster, but like I said, it is the fastest of the ways that everyone uses...

Jason

No, not asymptotically. You can prove that the most efficient comparison-based sorting algorithms are O(n log n). Non-comparison-based sorts, such as Proxmap or various bucket sorts, can run in O(n).

ObiwanJebroni

07-23-2003, 04:35 PM

Proxmap and bucket sorts? Can you shed some light on these terms? Thanks in advance.

Jason

07-23-2003, 07:37 PM

jkd:

No, not asymptotically. You can prove that the most efficient comparison-based sorting algorithms are O(n log n). Non-comparison-based sorts, such as Proxmap or various bucket sorts, can run in O(n).

But ObiwanJebroni is doing a comparison-based sort, which is why I was suggesting that quicksort is the fastest he could possibly get, unless he writes his own, which could prove to be a touch faster. The data here is being sorted by comparing elements, so comparison sorting algorithms should be used.

Jason

ObiwanJebroni

07-24-2003, 02:39 PM

Still no consensus on Shell Sort, eh? :p

jkd

You can try various O(n log n) sorts. It depends on the nature of the data being sorted. When using Big-Oh notation, you drop any constant coefficients and insignificant terms. For example, some may be n lg n and others may be n ln n, but using change of base, we still get n log n for Big-Oh. Or in the case of convex hull algorithms, it could even be (n lg n) + O(n): sorting the data points, then O(n) to find the convex hull.
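The change-of-base step can be written out explicitly:

```latex
% Any two log bases differ only by a constant factor,
% so the base disappears inside Big-Oh:
\log_2 n = \frac{\ln n}{\ln 2}
\quad\Longrightarrow\quad
n \log_2 n = \frac{1}{\ln 2}\, n \ln n = \Theta(n \ln n)
```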

You can do one of two things:

1. Take the popular O(n log n) sorts and actually prove the number of operations they require precisely.

2. Run them through your expected data repeatedly, and see which one performs slightly faster.

Anyway, Proxmap is an O(n) sort in most cases. It can be O(n^2) in the worst case, though, I believe, and it uses n^2 in memory (or maybe 2n... it's been a while).

Bucket sorts are for sorting data without needing comparisons. Say you were sorting strings. You could put every string starting with "a" in a bucket, every string starting with "b" in another, and so on, all the way up through "z". Then you take them out of the buckets in order and repeat for the second character. Once you have gone through all the characters, taking them out in order each time, you end up with the sorted data.

stoodder

07-26-2003, 03:33 AM

Hmm, really depends on what you're trying to sort... but I would say you could use an array of strings and a nested for loop, maybe?

premshree

08-01-2003, 09:39 PM

http://www.cs.bell-labs.com/cm/cs/pearls/code.html

http://www.cs.bell-labs.com/cm/cs/pearls/sortanim.html

BrainJar

08-28-2003, 05:31 PM

Depending on the implementation, quicksort has a possible worst case of O(n^2). It all depends on how you define the pivot value (which determines how a given list gets broken down into two sub lists).

If you choose a bad pivot, you can end up with only one item in one sub list and the remaining items in the other. That leads to the O(n^2) run time. To prevent that, you need to choose a pivot value that splits the list into equal sized sub lists as closely as possible.

That's what they call a balanced quicksort. A merge sort is similar and also gives a worst case of O(n log n).

That's all theory of course. In practice you can often improve sorting by combining methods. The overhead of a quicksort can make it run slower than other sorting algorithms for shorter lists. So you might speed up your quicksort by switching to something like a shell sort or insertion sort once a sub list gets below a certain length.

Other factors like memory requirements and I/O (if you're sorting files or database records) can affect the actual times as well. A lot depends on the data and where it's stored. So, like jkd said, you'd have to experiment in order to optimize for a particular situation.