
Message boards :
General discussion :
Mathematical Properties of Infinity

I was announcing the recent SR5 prime in my electronics club chat, and a fellow member asked what the purpose of finding these fascinating primes is. Another member answered: to find all the primes. Before I could say "up to a certain limit," a co-admin mentioned that this was impossible.
This got us thinking about whether finding all primes up to infinity is possible, and whether infinity itself is prime. Has any study been done on this kind of topic? What defines infinity? The total expanse of our universe?
So far there have been no convincing arguments among us.
I am on the side that infinity is composite, since why not? (Besides, at that large a number, primes will be a lot rarer.)
____________
SHSIDElectronicsGroup@outlook.com
waiting for a TdP prime...
Proth "SoB": 44243*2^440969+1
 


The problem with infinity is that, outside of various useful kludges in mathematics (see the extended real number line), it isn't a number or a rate or a stopping point by definition. It's more an extreme concept of "well beyond this point" that doesn't end. It gets more complicated still when you get to the aleph numbers and have to deal with the idea (among many others) that while both are infinite in size, there are more real numbers than natural numbers.
As best the research has revealed, the universe is not conclusively known to be infinite or finite, and the question may be unknowable. There are some suggestions that it is a 3-torus, but even that would give a finite volume despite the ability to fly in one direction at a speed faster than light and never reach an edge. If we are truly limited to the speed of light, then in some respects the universe can be considered infinite, because inflation has the space far away from us (beyond the observable universe) expanding away faster than the speed of light. Given a vehicle capable of c, we would never reach the unobservable part; unless, of course, the "Big Crunch" theory turns out to be correct.
It has long been proven that it is impossible to find all primes (even given infinite time): there are infinitely many of them (Euclid, ~300 BCE), so "all of them" just can't happen. The secondary problem is that the data storage capacity of the universe (given the likely finite volume) is finite, so even if the densest conceivable storage medium were employed (1 data unit per Planck volume), occupying every point in the universe, that's a hard limit of about 10^185 storage units, to be divided by the total number of digits of all the primes up to that point.
____________
Eating more cheese on Thursdays.  

mackerel (Volunteer tester)
Send message
Joined: 2 Oct 08 Posts: 2529 ID: 29980 Credit: 491,360,366 RAC: 138,213

The secondary problem is that the data storage capacity of the universe (given the likely finite volume) is finite, so even if the most dense storage medium were to be employed (1 data unit per Planck volume), occupying every point in the universe, that's a hard limit of about 10^185 divided by the total number of digits to that point.
That's if you write it out longhand. I'm sure we can push that up somewhat if we use more condensed notation.
Random thought: is there a way to prove that you can/can't express any single natural number in a fixed, limited (finite) amount of data? (If so, what would the minimum amount of data be?) As numbers get bigger, they tend to require more data to represent. But we can use methods to reduce that, e.g. the biggest known prime is usually written as 2^82,589,933−1, and not as its 24+ million decimal digits longhand. So I guess the resulting question is: can we increase the density of representing numbers faster than the expanded numbers grow, such that they could be contained in a finite amount of storage?
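The savings from that exponent notation are easy to quantify: the decimal length of 2^p − 1 follows directly from log10, so a short script (Python here, purely illustrative) can count the digits without ever expanding the number:

```python
from math import floor, log10

# Since 2^p is never a power of 10, 2^p and 2^p - 1 have the same
# decimal length, namely floor(p * log10(2)) + 1 digits.
p = 82_589_933
digits = floor(p * log10(2)) + 1
print(digits)  # 24862048 digits, versus ~12 characters for "2^82589933-1"
```

So roughly a 2-million-fold reduction for this particular number, precisely because it happens to have an extremely short description.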

Yves Gallot (Volunteer developer, Project scientist) Send message
Joined: 19 Aug 12 Posts: 669 ID: 164101 Credit: 305,042,960 RAC: 0

whether infinity is prime.
If infinity were prime, add one to it and you would find another infinity which is composite :)
Infinity is not an element of the set of natural numbers, so it is neither prime nor composite.
 


The secondary problem is that the data storage capacity of the universe (given the likely finite volume) is finite, so even if the most dense storage medium were to be employed (1 data unit per Planck volume), occupying every point in the universe, that's a hard limit of about 10^185 divided by the total number of digits to that point.
That's if you write it out longhand. I'm sure we can push that up somewhat if we use more condensed notation.
Random thought: is there a way to prove that you can/can't express any single natural number in a fixed, limited (finite) amount of data? (If so, what would the minimum amount of data be?) As numbers get bigger, they tend to require more data to represent. But we can use methods to reduce that, e.g. the biggest known prime is usually written as 2^82,589,933−1, and not as its 24+ million decimal digits longhand. So I guess the resulting question is: can we increase the density of representing numbers faster than the expanded numbers grow, such that they could be contained in a finite amount of storage?
To a degree, to be sure, but if you have a consecutive list of all known primes up to the point that the universe is full, most (almost all?) will be in a form that cannot be reduced to a single extremely compact expression the way the primes we search for can be (which is why we are able to determine their primality so easily). Of course, if you have the magic to store and read that much information that densely without containers, I'm sure you have the magic to set the information-carrying particle to enough distinguishable states to hold more than a single bit (much like MLC/TLC NAND).
Your random thought had me running to my college Number Theory book and notes! I think the answer would be "sometimes."
Among the many options contained in the Fermat polygonal number theorem, my favorite (because it's the second easiest) is the Lagrange four-square theorem. It states that any natural number N can be written as the sum of the squares of four integers (a, b, c, d): N = a^2 + b^2 + c^2 + d^2. Square numbers have around twice the number of digits of their roots, very roughly speaking. So if the four numbers chosen for squaring (assuming N isn't a number with only one valid set) together have fewer digits than N itself, the total data can be reduced to an {a,b,c,d} representation. Because of the growing distance between squares as numbers get larger, I would expect the difference between the number of digits of N and the least-digit valid {a,b,c,d} to graph like an expanding sine wave.
Now, if latency is not an issue, consider this thought: Fermat's polygonal number theorem is n-gon based, so three triangular numbers can also be used. If numbers were represented as sums of triangular numbers using an index notation, where each x_i stands for the i-th triangular number, you could have something like 17 = {1,3,4}, and the retrieval system does a separate calculation x(x+1)/2 to turn 1, 3, 4 into 1, 6, 10 and reproduce the number. Such a system would have more potential for data savings past a certain point, but I would still doubt whether it saves out in the vast voids between very large triangular values.
When having these thoughts myself, I like to do an internal thought experiment of "how would this look with 3-digit numbers, and how would it look with 30,000,000,000,000-digit numbers?"
____________
Eating more cheese on Thursdays.  
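The brute-force version of the four-square search described above is simple enough to sketch (Python, illustrative only; serious implementations use much smarter number theory):

```python
from math import isqrt

def four_squares(n):
    """Find (a, b, c, d) with n = a^2 + b^2 + c^2 + d^2.
    Lagrange's four-square theorem guarantees a solution exists."""
    r = isqrt(n)
    for a in range(r, -1, -1):                      # largest square first
        for b in range(isqrt(n - a*a), -1, -1):
            for c in range(isqrt(n - a*a - b*b), -1, -1):
                d2 = n - a*a - b*b - c*c
                d = isqrt(d2)
                if d * d == d2:                     # remainder is a square
                    return (a, b, c, d)

print(four_squares(17))  # (4, 1, 0, 0): 17 = 16 + 1 + 0 + 0
```

For 17 the set {4,1,0,0} takes more characters than "17" itself; the interesting question is how often the reverse holds as N grows.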


Oh man, thank you mackerel. You have my brain and math degree in full gear!
Let us assume that, since we have the magic to store great amounts of data in nearly infinitesimally sized spaces, time is of little importance to us (if any at all), so storing the information as compactly as possible matters far more than the time required to do the compacting:
A computer would determine the smallest (digit-wise) n-gonal sum of prime P (a maximum of n terms is required, so at most 5 pentagonal numbers, 12 dodecagonal numbers, etc.) for each n up to, say, the (P−2)-gon. (The P-gon's second term is P itself, and the (P−1)-gon has the simple sum (P−1) + 1, which always takes more digits.) Then whichever n-gon progression has the least total digits, counting the digits of n itself, is written out using n+1 terms. The n-gonal number formulas are easily derived. And even though 10^185 is a big value, think about how many digits there are if you write out all the numbers between 1 and 1,000,000; magnitudes more storage is required than most people realize.
So 17 could be written as {3,1,3,4} for the triangular (3-gonal) numbers indexed 1, 3, 4, or {4,1,4} for the square (4-gonal) numbers indexed 1, 4 (the two 0s being omitted), etc. Ultimately, the greatest storage savings could reasonably be achieved, assuming that the "very few" primes of known compact formats (like Proths) are already stored in their own reduced formats.
Now, if you'll excuse me, there's a very popular gif meme for people who like doing stuff like this (most of us on PG, I would think) that I need to act out myself before stuffing myself into a locker. :)
____________
Eating more cheese on Thursdays.  
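The retrieval side of such a scheme can be sketched with the triangular (3-gonal) case (Python, illustrative; Gauss's "Eureka" theorem guarantees three terms always suffice). Note a brute-force search may find a different valid index set than the {1,3,4} used above:

```python
from math import isqrt

def tri(i):
    """i-th triangular number: 0, 1, 3, 6, 10, ..."""
    return i * (i + 1) // 2

def three_triangulars(n):
    """Find indices (a, b, c) with tri(a) + tri(b) + tri(c) == n.
    Every natural number has such a decomposition (zeros allowed)."""
    top = (isqrt(8 * n + 1) - 1) // 2      # largest i with tri(i) <= n
    for a in range(top, -1, -1):
        for b in range(a, -1, -1):
            r = n - tri(a) - tri(b)
            if r < 0:
                continue
            c = (isqrt(8 * r + 1) - 1) // 2  # candidate index for remainder
            if tri(c) == r:
                return (c, b, a)

print(three_triangulars(17))  # (1, 1, 5): 1 + 1 + 15 = 17, one valid set
```

A real encoder would then pick, among all the n-gon options, the index set with the fewest total digits.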

mackerel (Volunteer tester)
Send message
Joined: 2 Oct 08 Posts: 2529 ID: 29980 Credit: 491,360,366 RAC: 138,213

Oh man, thank you mackerel. You have my brain and math degree in full gear!
I consider it an achievement that the question I asked made enough sense to get that far.
Mathematics to me is like a foreign language. I'm not totally ignorant, and I know bits of it. Just not enough to have a good conversation with someone more knowledgeable in the area.  


That's if you write it out longhand. I'm sure we can push that up somewhat if we use more condensed notation.
Random thought: is there a way to prove that you can/can't express any single natural number in a fixed, limited (finite) amount of data? (If so, what would the minimum amount of data be?) As numbers get bigger, they tend to require more data to represent. But we can use methods to reduce that, e.g. the biggest known prime is usually written as 2^82,589,933−1, and not as its 24+ million decimal digits longhand. So I guess the resulting question is: can we increase the density of representing numbers faster than the expanded numbers grow, such that they could be contained in a finite amount of storage?
You cannot compress all numbers. Some huge numbers like 2^82,589,933 − 1 can be specified with few characters, and you can invent all sorts of crazy notation (e.g. Knuth's up-arrow notation) to specify huge natural numbers.
But no matter what you do, you can only specify finitely many natural numbers if you allow only a finite amount of data (information).
If, for example, you could hold one bit of information for every elementary particle in the observable universe, and you could use any handy notation, compression (zipping) algorithm, or language to define numbers, it would still be the case that "almost all" natural numbers (meaning all but a finite set of exceptions) are impossible to represent.
Simple: if you have B bits on your "hard disk", you can distinguish no more than 2^B different numbers, and almost all natural numbers are out of reach.
/JeppeSN  
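The pigeonhole point above can be made concrete in a few lines (Python, illustrative):

```python
# With at most B bits you can form 2^0 + 2^1 + ... + 2^B = 2^(B+1) - 1
# distinct bit strings, so no encoding scheme -- however clever its
# notation -- can name more natural numbers than that.
B = 10
descriptions = sum(2**k for k in range(B + 1))
print(descriptions)                    # 2047
print(descriptions == 2**(B + 1) - 1)  # True
```

The same count holds no matter what the strings mean, which is why fancier notation only changes *which* numbers are reachable, never *how many*.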

Dave Send message
Joined: 13 Feb 12 Posts: 2882 ID: 130544 Credit: 1,095,370,587 RAC: 812,455

For the sheer hell of it, my C64 is going to work out the total characters required to store 1~1M. See you in 4.5 hours.  

Dave Send message
Joined: 13 Feb 12 Posts: 2882 ID: 130544 Credit: 1,095,370,587 RAC: 812,455

& it's 5,888,896 & took 906423 jiffies or 4h12.  
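Dave's figure is easy to cross-check: the direct sum below (Python, illustrative) reproduces it in well under the C64's 4h12, and a closed form over digit lengths confirms it:

```python
# Total characters to write the numbers 1 through 1,000,000 in decimal.
total = sum(len(str(n)) for n in range(1, 1_000_001))
print(total)  # 5888896 -- matches the Commodore 64 run

# Closed form: the 9 * 10^(d-1) d-digit numbers contribute d characters
# each; the lone 7-digit number, 1000000, adds 7 more.
closed = sum(9 * 10**(d - 1) * d for d in range(1, 7)) + 7
print(closed == total)  # True
```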


& it's 5,888,896 & took 906423 jiffies or 4h12.
5.9MB?
____________
SHSIDElectronicsGroup@outlook.com
waiting for a TdP prime...
Proth "SoB": 44243*2^440969+1
 

Dave Send message
Joined: 13 Feb 12 Posts: 2882 ID: 130544 Credit: 1,095,370,587 RAC: 812,455

5.9MB?
Assuming 1 byte per character. Or have 1 bit, or Planck volume, that flashes on/off for the required amount of time to indicate the number.
Currently doing to 10M. See you in 42 hours.  


5.9MB?
Assuming 1 byte per character. Or have 1 bit, or Planck volume, that flashes on/off for the required amount of time to indicate the number.
Currently doing to 10M. See you in 42 hours.
But you do not need to redo 1~1M again, so 38 hours.
____________
SHSIDElectronicsGroup@outlook.com
waiting for a TdP prime...
Proth "SoB": 44243*2^440969+1
 

Dave Send message
Joined: 13 Feb 12 Posts: 2882 ID: 130544 Credit: 1,095,370,587 RAC: 812,455

But you do not need to redo 11M again, so 38 hours.
Damn LOL. Too late, it's already done it.  


5.9MB?
Assuming 1 byte per character. Or have 1 bit, or Planck volume, that flashes on/off for the required amount of time to indicate the number.
Currently doing to 10M. See you in 42 hours.
You know you can do it by hand in a couple of minutes, right? Actually, having already done 1~1M, a few seconds.
____________
Eating more cheese on Thursdays.  

Dave Send message
Joined: 13 Feb 12 Posts: 2882 ID: 130544 Credit: 1,095,370,587 RAC: 812,455

You know you can do it by hand in a couple of minutes, right? Actually, having already done 1~1M, a few seconds.
Maybe, but I'm just going through the basic programming exercise for the inspiration. I see it's on OEIS anyway.  


You know you can do it by hand in a couple of minutes, right? Actually, having already done 1~1M, a few seconds.
Maybe, but I'm just going through the basic programming exercise for the inspiration. I see it's on OEIS anyway.
lol, but the sizes of the subsequent numbers are different from 1~1M. So test 1M~2M and then multiply.
____________
SHSIDElectronicsGroup@outlook.com
waiting for a TdP prime...
Proth "SoB": 44243*2^440969+1
 
