Delivered-To: luke@ndatech.com
X-Sender: rugeley@pop3.demon.co.uk
X-Mailer: QUALCOMM Windows Eudora Version 5.1
Date: Sat, 15 Dec 2001 22:45:08 +0000
To: Luke Welsh
From: mersenne-digest-invalid-reply-address@base.com (Mersenne Digest) (by way of Gordon Spence )
Subject: Mersenne Digest V1 #915

Mersenne Digest       Wednesday, December 5 2001       Volume 01 : Number 915

----------------------------------------------------------------------

Date: Wed, 05 Dec 2001 13:10:14 -0500
From: Nathan Russell
Subject: Mersenne: Correction regarding 'p-1 records' thread

Several list members have been kind enough to point out to me that 2^n is the smallest (n+1)-bit number - not the smallest n-bit number - in the same way that 10^1 is the smallest 2-digit number.

Nathan
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 19:20:48 -0000
From: bjb@bbhvig.uklinux.net
Subject: Re: Mersenne: New exponents

On 4 Dec 2001, at 17:59, George Woltman wrote:

> >Case 1: I finish first, find a prime and announce my discovery. I did
> >the work but the exponent is assigned to you! Who gets the credit???
>
> You get the credit. User b will be mighty disheartened. I know first hand.
> Slowinski's Cray beat my own Pentium-90 by just a few days in the discovery
> of M#34.

Ooof. I didn't know about that.

> As to legal issues, the disclaimer section of the download page states:
>
> We are not responsible for lost prize money, fame, credit, etc. should
> someone accidentally or maliciously test the number you are working on and
> find it to be prime.

The case I describe might not fall under the legal definition of "accidental" or "malicious". It would be possible for one or other of the parties to argue that they had been assigned the work by PrimeNet and that PrimeNet was therefore liable for lost prize money. On the "belt and braces" principle, I think the wording should be reinforced.

As to people working independently - I don't see how you can cover that one, since the independent party will not be covered by the disclaimer if they never downloaded any variant of Prime95. In that case it _is_ accidental that PrimeNet should assign work which is being replicated elsewhere without its knowledge.

Nowadays it's at least a reasonable assumption that people with interests in this field _would_ obtain assignments through PrimeNet. In the early days it would be much more likely that people were replicating each other's work without knowing it. Presumably such a situation is what led to your disappointment.

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 19:20:48 -0000
From: bjb@bbhvig.uklinux.net
Subject: Re: Mersenne: Re: Mersenne Digest V1 #913

On 4 Dec 2001, at 20:36, Gordon Spence wrote:

> >I've triple-checked thousands of small exponents - some of the ones where
> >the accepted residual was recorded to only 16 bits or less, which makes
> >the chance of an undetected error _much_ greater (though still quite
> >small) - so far no substantive errors in the database have come to light.
> >A very few (think fingers of one hand) instances of incorrectly matched
> >residuals have come to light - completing the double-check in these cases
> >proved that one of the recorded residuals was correct.
>
> Currently my team report cleared list shows 338 double checks and 12 double
> checked factored including this monster

I'm not talking about missed factors. The database shows that all the small exponents (< 1 million) have been factored at least a bit or two deeper than the "calculated optimum", so I haven't even been trying. I've found quite a few factors of these small exponents by running P-1, but that's a different story.

My point here is that if we have database entries with at most one 64-bit residual (so that the matching residuals depend on only the bottom 16 bits), so far when I've run a triple-check LL test the bottom 16 bits have always matched; indeed when there is already one 64-bit residual to compare with, my triple-check has always matched that. The work I'm doing in this area is a marginally useful way of using systems which are rather too limited to do other work.

> > 6630223  87  DF  195139088771490335223859559  07-Apr-01 07:58  trilog
>
> (In fact when it was checked in PrimeNet initially rejected it because it
> was longer than this sort of check was supposed to find! Has anyone found a
> factor bigger than 87 bits using Prime95?)

Interesting, though George's answer explains this. I wrote the factor validation code running on the server; I specifically wrote it to be able to handle factors of any size (subject to system memory, at approx 8 bytes per decimal digit of the factor), with a rider that run times for extremely large factors (>> 50 digits) might be problematic, given that the server cannot afford to spend very long running each validation.

My record factor found using Prime95 is

[Fri Sep 07 21:33:06 2001] ECM found a factor in curve #199, stage #2
Sigma=6849154397631118, B1=3000000, B2=300000000.
UID: beejaybee/Simon2, P1136 has a factor: 9168689293924594269435012699390053650369

I've actually found a couple of larger factors using P-1, but these don't count, as the factors found were composite. This can happen because P-1 depends on finding the GCD of a calculated value and the number being factored. (So does ECM.) If you're (un?)lucky you can find two factors at once, in which case the result of the GCD is their product. This is what ECM uses the lowm & lowp files for - ECM is often used to try to find more factors of a number with some factors already known; when you get a GCD > 1 you have to divide out any previously known factors to find out whether you've discovered a new one. (See the second sketch below.)

> Of course some of these may be because the original check went to a lower
> bit depth than the version of Prime95 that I used.

Naturally.

> I know from doing "deep factoring" in the 60m range that one more bit of
> factoring can find a "lot" of extra factors...

Over ranges of reasonable size, the number of factors you find between 2kp+1 and 2Kp+1 should be independent of p, i.e. the expected distribution of smallest factors is logarithmic. For factors of a particular absolute size, larger exponents make finding factors easier. The effort involved is (ignoring complications resulting from computer word length, which are certainly not insignificant!) dependent on the range of k, not the range of 2kp+1.
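As an illustration of the point about the range of k: every prime factor q of M(p) = 2^p - 1 (p an odd prime) has the form q = 2kp+1 and satisfies q = +/-1 (mod 8), so trial factoring is essentially a walk over k. A minimal Python sketch of the idea - not GIMPS code, and the function name is made up:

    def trial_factor(p, kmax):
        """Naive trial factoring of M(p) = 2^p - 1 over candidates 2kp+1."""
        found = []
        for k in range(1, kmax + 1):
            q = 2 * k * p + 1
            # A candidate must be +/-1 (mod 8); q divides 2^p - 1 exactly
            # when 2^p = 1 (mod q).  (A composite q slipping through this
            # naive sieve is just a product of smaller factors of the same
            # form, so it is still a genuine divisor.)
            if q % 8 in (1, 7) and pow(2, p, q) == 1:
                found.append(q)
        return found

    print(trial_factor(11, 10))   # -> [23, 89]; 2^11 - 1 = 2047 = 23 * 89

Note that the loop length depends only on the range of k, which is the point above.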
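And the second sketch, for the GCD point a few paragraphs up: P-1 (and ECM) deliver factors as a GCD, so a "lucky" run can return two factors multiplied together, and previously known factors have to be divided out of the result. Hypothetical Python, not Prime95's code - the function name is made up:

    def strip_known_factors(g, known_factors):
        """Divide previously known factors out of a GCD result g > 1."""
        for f in known_factors:
            while g % f == 0:
                g //= f
        return g if g > 1 else None    # None -> nothing new this time

    # Toy example with M29 = 2^29 - 1 = 233 * 1103 * 2089: suppose a run's
    # GCD picks up 233 and 2089 at once, with 233 already in the database;
    # dividing 233 out exposes the new factor 2089.
    print(strip_known_factors(233 * 2089, [233]))   # -> 2089

The survivor can itself still be composite (two genuinely new factors at once), so it would still have to be tested for primality before being recorded as a single new factor.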
> So if we say that as a ballpark figure half of these are due to an
> increase in factoring depth, then the error rate from this admittedly
> small sample is 1.78%, or in other words, of the current 137,924 exponents
> less than 20m with only a single LL test we can expect to find just under
> 2500 exponents with an incorrect result.

This is an interesting finding - roughly in line with other estimates of raw error rates - but I'm not sure I entirely understand the logic. I simply don't see how "missed factor" errors during trial factoring are related to incorrect residuals resulting from LL testing - except that the LL test wouldn't have been run if the factor wasn't missed.

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 15:34:34 -0500
From: Jud McCranie
Subject: Mersenne: HD crash

My hard drive crashed, and I have almost certainly lost all of the GIMPS data for the exponent I was working on and 4 more I had in the queue. The initial trial factorization had been done on all of them, and the first one was just about 4 days from completion. What should I do about these lost exponents? (I don't know which ones they were.)

+-----------------------------------------------------------------+
| Jud McCranie                                                    |
|                                                                 |
|"Thought I saw angels, but I could have been wrong." Ian Anderson|
+-----------------------------------------------------------------+
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 13:40:22 -0800
From: "Aaron Blosser"
Subject: RE: Mersenne: HD crash

Best idea is to look at your account status page and find the exponents for that machine on there.

I made the mistake of rebuilding my laptop the other day, and while I had backed up everything else, I forgot to back up the directory with ntprime. Argh... fortunately it wasn't too far along on the current exponent.

Just rebuilt the worktodo.ini file with the appropriate test=xxxx,xx lines (optionally add the ,1 on the end if it had already completed the p-1 factoring) and let 'er rip.

At least on my machines at home I try to be better about making at least weekly backups of the prime directories on there. My machines at work don't suffer the same "wipe and rebuild" fate as many of my home test machines, so I don't bother backing them up much, which of course is why I lost the stuff on my work laptop. :)

Aaron

> -----Original Message-----
> From: mersenne-invalid-reply-address@base.com
> [mailto:mersenne-invalid-reply-address@base.com] On Behalf Of Jud McCranie
> Sent: Wednesday, December 05, 2001 12:35 PM
> To: mersenne@base.com
> Subject: Mersenne: HD crash
>
> My hard drive crashed, and I have almost certainly lost all of the GIMPS
> data for the exponent I was working on and 4 more I had in the queue. The
> initial trial factorization had been done on all of them and the first one
> was just about 4 days from completion. What should I do about these lost
> exponents? (I don't know which ones they were)
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 23:21:06 +0100
From: Alexander Kruppa
Subject: Re: Mersenne: New exponents

bjb@bbhvig.uklinux.net wrote:
>
> On 4 Dec 2001, at 17:59, George Woltman wrote:
>
> > >Case 1: I finish first, find a prime and announce my discovery. I did
> > >the work but the exponent is assigned to you! Who gets the credit???
> >
> > You get the credit. User b will be mighty disheartened. I know first hand.
> > Slowinski's Cray beat my own Pentium-90 by just a few days in the discovery
> > of M#34.
>
> Ooof. I didn't know about that.

I read about this in the mailing list archives. The info was scattered over several posts; hopefully I recall everything correctly and all the story bits belong to the same Mersenne prime! I hope George will correct me if I'm wrong.

David Slowinski contacted George, asking him whether Prime95 could test numbers >1 million bits. He had just discovered that M1257787 was prime - when George's own computer was only a few days from finishing that very exponent! David also asked for an independent verification of the prime, so suddenly the LL run on George's computer that had almost discovered GIMPS' first prime was no more than a double check for David's success.

To make matters worse, Slowinski delayed the announcement of the prime as he "was out of town" for a while - which turned out to be the better part of half a year. I think I would have flipped. Living for half a year with such a freak incident on your mind and not even being able to tell anyone. OTOH, it's an impressive example of how to keep a secret.

Alex
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 14:40:43 -0800 (PST)
From: Mary Conner
Subject: Re: Mersenne: New exponents

On Wed, 5 Dec 2001, Alexander Kruppa wrote:

> David Slowinski contacted George, asking him whether Prime95 could test
> numbers >1 million bits. He had just discovered that M1257787 was prime
> - when George's own computer was only a few days from finishing that
> very exponent! David also asked for an independent verification of the
> prime, so suddenly the LL run on George's computer that had almost
> discovered GIMPS' first prime was no more than a double check for
> David's success.
>
> To make matters worse, Slowinski delayed the announcement of the prime
> as he "was out of town" for a while - which turned out to be the better
> part of half a year. I think I would have flipped. Living for half a
> year with such a freak incident on your mind and not even being able to
> tell anyone.
> OTOH, it's an impressive example of how to keep a secret.

Ooof, so if Slowinski had gone out of town without contacting George, or had contacted someone else for independent verification, George's computer would have found the prime, and could have announced the discovery before Slowinski returned, and boy, wouldn't that have been a huge snarl over who got credit. Not to mention that someone other than George or Slowinski could have found it too.

On a purely technical note, I've just been assigned an exponent that expired from someone else. I looked at the active exponent report from just before it was expired, and it looks like the exponent was about one third done before it expired. In the event that the other person does eventually check back in, is there a mechanism in place to either tell his machine or mine that it should abandon the exponent (it's a double check LL assignment, so two tests don't need to be run), or would we just end up both continuing on the same exponent, ending with it triple checked? Given how long this person had the exponent compared to the progress on it, it is likely that I would finish long before him anyway.
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 22:55:04 -0000
From: bjb@bbhvig.uklinux.net
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

On 5 Dec 2001, at 6:09, ribwoods@execpc.com wrote:

> Brian Beesley wrote:
> > On 3 Dec 2001, at 20:38, ribwoods@execpc.com wrote:
> [... snip ...]
> > > I think our record shows that a verified factor is still
> > > slightly (by a minute but nonzero margin) more reliable an
> > > indicator of compositeness than two matching nonzero LL
> > > residues.
> >
> > AFAIK our record does _not_ show any such thing.
>
> Oh? It doesn't?

There is no evidence of any verified residuals being incorrect. Neither is there any evidence that any verified factors are incorrect. Whatever theory states, the experimental evidence is that verified factors are no more (or less) reliable than verified LL tests.

Suppose a taxi firm runs 10 Fords and 10 Hondas for a year. None of them break down. On that basis alone, there is no evidence whatsoever that one make is more reliable than the other. Naturally, other companies' experimental evidence may vary.

> [ big snip ]
>
> There is a small chance that we may accept an incorrect factor even
> after double-checking it, but that chance is even smaller than the
> small chance that we may accept an incorrect double-checked L-L
> residual.

I doubt very much that we would accept an incorrect factor. The double-checking is done with completely different code. Besides which, checking a factor takes a few microseconds, whereas checking an LL test likely takes a few hundred hours (see the sketch below). If anything goes wrong during a factoring run, we would be far more likely to miss a factor which we should have found rather than vice versa. This is relatively unimportant from the point of view of finding Mersenne primes; the effect is a small loss of efficiency.

> How does that compare to the observed rate of incorrect factors
> discovered after triple-checking _them_?

AFAIK no-one bothers to triple-check factors. Nowadays factors are verified on the server at the time they're reported. I'm not privy to the server logs, so I simply don't know how many get rejected (except for the server _database_ problem reported recently - not related to the actual factor checking code - which causes problems with genuine factors > 32 digits in length). However, I can think of at least one way of pushing up the rejection rate.

> How many of those problems caused errors during L-L verifications,
> and how many caused errors during factor verifications?

All during LL first tests, or QA runs (which are LL & DC done in parallel with intermediate crosscheck points).
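The "few microseconds" figure is easy to see: q divides M(p) = 2^p - 1 exactly when 2^p = 1 (mod q), so verifying a claimed factor is a single modular exponentiation. A minimal Python sketch - an illustration of the principle, not the server's actual validation code:

    def is_factor_of_mersenne(p, q):
        """True exactly when q divides M(p) = 2^p - 1."""
        return pow(2, p, q) == 1

    print(is_factor_of_mersenne(11, 23))   # -> True:  2047 = 23 * 89
    print(is_factor_of_mersenne(11, 29))   # -> False
    # Even the 87-bit factor of M6630223 quoted earlier checks this way
    # in a couple of dozen small modular squarings - microseconds.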
> However, you may not have spent anywhere near as much time doing
> factor verifications as you have doing L-L verifications, so it may
> not be valid to draw any conclusion about comparative error rates on
> your system.

I've spent no time at all verifying factors - it would take less than a minute to verify everything in the factors database. The total factoring effort I've put in (ignoring ECM & P-1 on small exponents) is only about 3% of my total contribution, so I would expect not to have had any factoring errors. Besides which, trial factoring _should_ have a lower error rate than LL testing, due to the lower load on the FPU (which is usually the CPU element most sensitive to excess heat) and the smaller memory footprint (less chance of data getting clobbered by rogue software or random bit-flips).

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 20:40:53 -0500
From: George Woltman
Subject: Re: Mersenne: New exponents

At 02:40 PM 12/5/2001 -0800, Mary Conner wrote:

> > To make matters worse, Slowinski delayed the announcement of the prime
> > as he "was out of town" for a while - which turned out to be the better
> > part of half a year. I think I would have flipped. Living for half a
> > year with such a freak incident on your mind and not even being able to
> > tell anyone.
> > OTOH, it's an impressive example of how to keep a secret.

Keeping the secret was easy. I had promised David.

>Ooof, so if Slowinski had gone out of town without contacting George, or
>had contacted someone else for independent verification, George's computer
>would have found the prime, and could have announced the discovery before
>Slowinski returned, and boy, wouldn't that have been a huge snarl over who
>got credit.

Probably not. One of the reasons we tell past Mersenne discoverers immediately is to "stake our claim". I know David contacted Richard Crandall immediately, and he probably contacted others. If David had not contacted me directly, then when my computer found M#34 and I relayed the exciting news to Dr. Crandall, he would have given me the bad news.

Slightly off topic: someone at the time (a non-GIMPSer) said there was no way I just missed out on M#34 - the odds must be one in a million. So I calculated:

a) given there is a Mersenne prime in the 1.2 millions, and
b) given that a Cray found it, and
c) given that there were about 40 GIMPS Pentiums working in the 1.2 million area at the time, and
d) given the time it takes the average Pentium to do an LL test (I've since forgotten that value),

what is the chance that the two competing groups would both find the Mersenne prime within a week of each other? The answer turned out to be 3%. Not likely, but a far cry from 1 in a million. (One possible reading of the calculation is sketched below.)

>On a purely technical note, In the event that the other person does
>eventually check back in, is there a mechanism in place to either tell his
>machine or mine that it should abandon the exponent

No. The server never contacts the client. That's too much of a security risk in my book.
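George doesn't spell out the model behind the 3% figure, so the following is only one plausible reading, with an invented window length - a back-of-envelope sketch in Python, not George's actual calculation:

    # Toy model (assumptions mine): suppose GIMPS's completion date for
    # that one exponent was uniformly spread over a window of W days and
    # the Cray's date fell well inside the window.  The two groups then
    # finish within a week of each other with probability about 14/W.
    W = 470                    # assumed window in days - a made-up number
    print(f"{14 / W:.1%}")     # -> 3.0%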
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 20:45:05 -0500
From: George Woltman
Subject: Re: Mersenne: New exponents

At 02:40 PM 12/5/2001 -0800, Mary Conner wrote:

> > David Slowinski discovered that M1257787 was prime
> > - when George's own computer was only a few days from finishing that
> > very exponent!

One other "what could have been" note. I owned two computers at the time. The P-90 was testing 1257xxx and the PPro-200 was testing 1258xxx. If only I'd assigned the ranges the other way around.... :)
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 19:10:24 -0600 (CST)
From: ribwoods@execpc.com
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

Brian,

I'm wondering whether we may be misunderstanding each other's contentions here. I thought you objected to at least some of what I claimed, but now it seems that you're presenting arguments and evidence that support what I'm claiming.

Since my previous postings may have had careless wording or otherwise obscured my intentions, and I did not earlier realize the importance of certain details to the discussion, let me restate what I've meant to claim:

1. It is more valuable to know a specific factor of a Mnumber than to know that that Mnumber is composite without knowing any specific factor.

(There's little dispute about #1.)

2. Claim #1 is true not only from the viewpoint of mathematics in general, but also from the narrower viewpoint of the GIMPS search for Mersenne primes.

3. One (but not the only) justification for claim #2 is that, _in current practice_, a composite status derived by GIMPS from finding a specific factor is (slightly) more reliable than a composite status derived by GIMPS from matching nonzero residues from Lucas-Lehmer tests.

That is, although in theory, or ideally, those two methods of determining compositeness are equally reliable, there currently exists a slight difference in reliability, in favor of the factor, from a practical standpoint.

4. Our experience ("the record"), as documented in the Mersenne mailing list or GIMPS history, supports claim #3.

- - - - -

Brian Beesley wrote:
>> > AFAIK our record does _not_ show any such thing.
>>
>> Oh? It doesn't?
>
> There is no evidence of any verified residuals being incorrect.

Wait a second -- just yesterday you wrote that you had "triple-checked thousands of small exponents" (which means they had already been double-checked) and that "A very few (think fingers of one hand) instances of incorrectly matched residuals have come to light - completing the double-check in these cases proved that one of the recorded residuals was correct".

So it seems that the meaning you're assigning to "verified" is something like "retested and retested until two residuals match". Is that a correct interpretation? If not, what is?

My claim #3 means that in practice, factors require fewer verification runs to produce matching results than do L-L residues, on average. Do you disagree with that? If not, then don't we agree about claim #3?

Furthermore, my claim #4 means that the demonstration that factors require fewer verification runs to produce matching results than do L-L residues, on average, rests on the observed history _including the paragraph you wrote from which I just quoted above!_ Do you disagree?

Also, in that same paragraph you wrote, "... - some of the ones where the accepted residual was recorded to only 16 bits or less, which makes the chance of an undetected error _much_ greater (though still quite small) ..." Am I correct in interpreting this to mean that you think that using 64-bit residuals is more reliable than using 16-bit residuals? If so, then surely you'll grant that 256-bit residuals would be even more reliable yet, meaning that there's still room for error in our practice of using 64-bit residuals. But a specific factor is a _complete value_, not some truncation, and so its reliability is not damaged by the incompleteness which you admit keeps the L-L residues from being totally reliable - right?

Then you wrote "so far no substantive errors in the database have come to light", but seemingly contradicted that in the very next sentence, "A very few (think fingers of one hand) instances of incorrectly matched residuals have come to light - completing the double-check in these cases proved that one of the recorded residuals was correct." ... And thus _the other_ recorded residual was _incorrect_.

> Neither is there any evidence that any verified factors are incorrect.

Depends on the meaning of "verified", of course. :-)

Will Edgington (I think) has reported finding errors in his factor database ... even though he verifies factors before adding them. Mistakes happen. But I think the error rate for factors has been significantly lower than for L-L residuals.

> Whatever theory states, the experimental evidence is that verified
> factors are no more (or less) reliable than verified LL tests.

Then why don't we triple-check factors as often as we triple-check L-L results? Oh, wait ... depends on the meaning of "verified", again.

> Suppose a taxi firm runs 10 Fords and 10 Hondas for a year.
[ snip ]

Let's have an example in which the number and nature of the units is closer to the gigabytes of data items we're slinging around.

> Besides which, checking a factor takes a few microseconds, whereas
> checking a LL test likely takes a few hundred hours.

... which tends to support my claim #3.

> If anything goes wrong during a factoring run, we would be far more
> likely to miss a factor which we should have found rather than vice
> versa.

... which is in agreement with claim #3.

>> How does that compare to the observed rate of incorrect factors
>> discovered after triple-checking _them_?
>
> AFAIK no-one bothers to triple-check factors.

... because we know they're more reliable than L-L results (#3 again), based on our actual experience (claim #4) with them.

So we seem to be in agreement about my claims #1-#4. I hypothesize that your previously-expressed demurrers were related to unclear wording on my part. Okay?

Regards,
Richard B. Woods
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 21:23:27 -0500
From: Nathan Russell
Subject: Re: Mersenne: New exponents

At 08:40 PM 12/5/2001 -0500, George Woltman wrote in reply to Mary Conner:

>>On a purely technical note, In the event that the other person does
>>eventually check back in, is there a mechanism in place to either tell his
>>machine or mine that it should abandon the exponent
>
>No. The server never contacts the client. That's too much of a security
>risk in my book.

However, when the client does contact the server (every 28 days by default, IIRC), will it not get a "this assignment does not belong to us" response? I know that I had that happen while I had QA and primenet work queued on the same machine, and in fact it happened often enough to be rather annoying.

Nathan
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 19:15:39 -0800 (PST)
From: Mary Conner
Subject: Re: Mersenne: New exponents

On Wed, 5 Dec 2001, George Woltman wrote:

> >On a purely technical note, In the event that the other person does
> >eventually check back in, is there a mechanism in place to either tell his
> >machine or mine that it should abandon the exponent
>
> No. The server never contacts the client. That's too much of a security
> risk in my book.

That isn't exactly what I meant. Given that his exponent has been expired and assigned to me, if he then checks in later to report further progress on the exponent, will the server tell his client that the exponent has been expired and assigned to someone else, or will it tell me the next time I check in that he is still working on it? Or do both of our clients continue happily chugging away on it?
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 05 Dec 2001 22:33:46 -0500
From: George Woltman
Subject: Re: Mersenne: New exponents

At 07:15 PM 12/5/2001 -0800, Mary Conner wrote:

> > No. The server never contacts the client. That's too much of a security
> > risk in my book.
>
>That isn't exactly what I meant. Given that his exponent has been expired
>and assigned to me, if he then checks in later to report further progress
>on the exponent, will the server tell his client that the exponent has
>been expired and assigned to someone else, or will it tell me the next
>time I check in that he is still working on it?

If he checks in his result, primenet will return error 11 - but your computation will continue.

>Or do both of our clients continue happily chugging away on it?

I think prime95 removes it from worktodo.ini only if you have not started the LL test. Obviously there is some room for improvement here. The current scheme works OK for first-time checks, since yours would then become a valid double-check.
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Wed, 5 Dec 2001 22:51:01 -0600
From: "Steve Harris"
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

Richard,

Your first interpretation of "verified" residues is correct: they are retested until two residues match. Any time a double-check reports in a residue which is different from the first LL test, the exponent is returned to the database to be tested again. This means that at least one of the residues is incorrect, and happens (relatively) often, I believe about two percent of the time. However, as has been pointed out before, the odds of two LL tests on different machines returning the _same_ incorrect residues are astronomical (although, of course, still non-zero).

Steve

- -----Original Message-----
From: ribwoods@execpc.com
To: bjb@bbhvig.uklinux.net
Cc: mersenne@base.com
Date: Wednesday, December 05, 2001 8:34 PM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

[ big snip - Richard's message is quoted in full above ]
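To make the residual talk concrete: here is a toy Lucas-Lehmer test in Python (nothing like Prime95's FFT-based arithmetic), showing that the compared "residual" is just the bottom 64 bits of the final LL value, along with the number behind "astronomical":

    def lucas_lehmer(p):
        """Toy LL test of M(p) = 2^p - 1 for an odd prime p."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        # M(p) is prime iff s == 0; otherwise the low 64 bits of s are
        # the "residual" that double-checks compare.
        return s == 0, s & 0xFFFFFFFFFFFFFFFF

    print(lucas_lehmer(13))   # -> (True, 0):     M13 = 8191 is prime
    print(lucas_lehmer(11))   # -> (False, 1736): M11 = 2047 = 23 * 89
    # Two independently wrong runs agree by chance with probability ~2**-64:
    print(100 / 2 ** 64)      # ~5.42e-18 per cent - Steinar's figure below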
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 6 Dec 2001 15:19:52 +0100
From: "Steinar H. Gunderson"
Subject: Mersenne: Re: Factoring benefit/cost ratio

On Wed, Dec 05, 2001 at 07:10:24PM -0600, ribwoods@execpc.com wrote:
>Am I correct in interpreting this to mean that you
>think that using 64-bit residuals is more reliable than using 16-bit
>residuals? If so, then surely you'll grant that 256-bit residuals
>would be even more reliable yet, meaning that there's still room for
>error in our practice of using 64-bit residuals.

Given that the chance of a wrongly matched 64-bit residual (assuming a random error rather than a program error) is about 0.00000000000000000542%, I think you'll agree that a 64-bit residual is sufficient. :-)

/* Steinar */
- --
Homepage: http://www.sesse.net/
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

Date: Thu, 06 Dec 2001 11:50:13 -0500
From: George Woltman
Subject: Mersenne: More on M#39

Hi all,

Another news story (a good one) a little ahead of schedule:
http://news.bbc.co.uk/hi/english/sci/tech/newsid_1693000/1693364.stm

In the last 24 hours, Guillermo Ballester Valor's Glucas run and Paul Victor Novarese's run using Ernst Mayer's program have completed and proved the number prime. I presume Scott will now try to get the story really rolling in the press. Let's wish him luck.

Congratulations to all,
George
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers

------------------------------

End of Mersenne Digest V1 #915
******************************