Wikipedia:Reference desk/Archives/Mathematics/2015 April 11



April 11

fractals

Is it theoretically possible to represent ANYTHING as a tiny fractal algorithm that will eventually expand to reproduce the original material? Like, for example, could you store The Dark Knight Rises in Blu-ray quality in a tiny mathematical equation that eventually expands into all the binary data required to show the movie? Curdleator (talk) 20:37, 11 April 2015 (UTC)

Yes, it's possible, but it's also meaningless. An algorithm is basically just a series of steps/rules. If you customize the steps/rules, you can get any result you want. That is, by deliberately customizing the algorithmic design and inserting deliberately-chosen constants/inputs/parameters, you can generate any set of data you want, including the exact stream of bits that constitutes The Dark Knight Rises on Blu-ray. But all you're doing is fine-tuning, which is criticized in science for its lack of naturalness.
It's as meaningless as numerology techniques. You could apply a series of arbitrary rules to anybody's name to get out any arbitrary number or result. For example, the number 13 is often considered to be a significant number, and the number 42 is claimed to be "the Answer to the Ultimate Question of Life, the Universe, and Everything" in certain segments of popular culture. Michael Jordan, considered one of the greatest basketball players of all time, has 13 letters in his name. Furthermore, his first name is 7 letters, and his last name is 6 letters, and 7 x 6 = 42. But it's not really supportable to conclude that his basketball achievements were predestined by the number of letters in his name.
The algorithm doesn't even have to be fractal. In general, for a set of n points (x1, y1), (x2, y2), (x3, y3), ... (xn, yn) with distinct x-values, you can generate a polynomial of degree n - 1 that will fit all the points exactly (rather than just capturing general trends while avoiding an exact fit, as polynomial regression does), but it's pretty meaningless, since you're arbitrarily choosing coefficients to make it do so.
SeekingAnswers (reply) 20:57, 11 April 2015 (UTC)
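As a concrete illustration of that last point, here is a minimal sketch in Python (the data points are arbitrary made-up values): numpy.polyfit with degree n - 1 recovers a polynomial passing through all n points exactly.
 import numpy as np
 
 # Four arbitrary points with distinct x-values; degree 3 fits them exactly.
 x = np.array([0.0, 1.0, 2.0, 3.0])
 y = np.array([5.0, -2.0, 7.0, 1.0])
 
 coeffs = np.polyfit(x, y, deg=len(x) - 1)  # exact interpolation, not regression
 print(np.polyval(coeffs, x))               # reproduces y up to rounding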
But, in general, the algorithm is likely to be as long and complex as the stream of data it is designed to match. You might just be lucky and hit on an algorithm that generates exactly what you want in a short formula, but in other cases you will probably end up with a formula far longer than the specific output data stream it is designed to produce. Dbfirs 21:04, 11 April 2015 (UTC)
See also: Kolmogorov complexity.--Antendren (talk) 00:17, 12 April 2015 (UTC)
Is Kolmogorov complexity really applicable to specific problems? E.g., in my favorite computer language "DarkKnight", the algorithm to output The Dark Knight Rises is just "return $DarkKnightRises". It seems like, if we don't specify the background language, then the Kolmogorov complexity is only meaningful "asymptotically" (in some sense), by the invariance theorem. Sławomir Biały (talk) 12:30, 12 April 2015 (UTC)
You do need to specify the language in advance, but a single language can be applied universally. I could describe here in a few lines a simple Turing-complete language that transforms bitstrings into other bitstrings. Then we could talk about the Kolmogorov complexity of every bitstring under this language. The complexity of The Dark Knight Rises will be high, and the complexity of a sequence of a billion 1's will be low. Of course, actually calculating (or putting lower bounds on) the complexity of a given string is difficult in practice. -- Meni Rosenfeld (talk) 13:21, 12 April 2015 (UTC)
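To make that tangible, here is a minimal sketch in Python using zlib as a stand-in for a fixed description language (Kolmogorov complexity proper is uncomputable, so this only illustrates the flavour): a string of a million 1's compresses to almost nothing, while random bytes do not.
 import os
 import zlib
 
 regular = b"1" * 10**6       # very low complexity: "print '1' a million times"
 random_ = os.urandom(10**6)  # random data is essentially incompressible
 
 print(len(zlib.compress(regular)))  # on the order of 1 KB
 print(len(zlib.compress(random_)))  # about 10**6, slightly more than the input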
The answer is no.
I think you're misusing the term "fractal" so I'll just assume the question is "is it possible to represent anything with a tiny algorithm". You can't, because there are more things than tiny algorithms. Let's say you want to represent any data of up to 1 billion bits, and you want to do it with an algorithm of up to 10,000 bits. There are more than 2^1,000,000,000 things but fewer than 2^10,001 algorithms, so you can't match an algorithm to every thing.
Of course, in many domains it is the case that the things we are interested in are much fewer than all things. For example, valid code is a tiny fraction of the space of all text strings, and the code that could conceivably be written by a human is a tiny fraction of that. Compression algorithms work by exploiting the structure of these subsets of interest. The trick is to use short algorithms for the things we're likely to encounter, at the cost of very long algorithms for things we're not likely to encounter. For example, the word "for" is likely to be repeated many times in code, so it will be assigned a short token in the compression. But this optimization means that if the code does not include the word "for", the compressed length will increase.
Some strings simply can't be compressed no matter how hard we try. As linked by Antendren, Kolmogorov complexity is a way to measure that.
Specifically with video encoding, we can use lossy compression, which doesn't reproduce the original data, but something close to it within a given threshold (according to a domain-specific metric). Grouping together different data streams which are equivalent for our purpose further reduces the total number of things we need our algorithms to be able to represent - therefore reducing the total number of algorithms needed, and thus their length. -- Meni Rosenfeld (talk) 11:39, 12 April 2015 (UTC)
There are more things than tiny algorithms, but one can represent apparently complex things with a small set of rules. For instance, self-organized criticality. Consider, for example, a cellular automaton exhibiting some chaotic dynamics. Let's make a feature-length Blu-ray movie of that. The bytes in that movie are not necessarily directly generated by a cellular automaton (there is an encoding first into images and then into bytes). But ultimately the movie is generated by a simple set of rules. Now, it seems to me that the cells do not necessarily "know" the rules that govern them (indeed, the rules may not even be "knowable" to the cells). Instead, it requires some higher-order system to model the automaton. Likewise, the world may be generated by some very simple set of rules so that, in principle, in some higher-order system we can run the universe as a simulation just up to the point when The Dark Knight Rises is filmed. "All the world's a stage... and one man in his time plays many parts." We limited beings aren't necessarily aware of the rules, and the rules may not even be discoverable, but this doesn't have to mean that they aren't there. Sławomir Biały (talk) 13:00, 12 April 2015 (UTC)
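As a toy version of that, here is a minimal sketch in Python of an elementary cellular automaton (Rule 30 is chosen arbitrarily as a well-known chaotic rule; the grid size is also arbitrary): a few lines of rules generate output that looks complex.
 # Each cell's next state depends only on itself and its two neighbours;
 # Rule 30's output table is the binary expansion of the number 30.
 WIDTH, STEPS = 64, 32
 cells = [0] * WIDTH
 cells[WIDTH // 2] = 1  # single seed cell
 
 for _ in range(STEPS):
     print("".join("#" if c else "." for c in cells))
     cells = [(30 >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
              for i in range(WIDTH)]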
  • The keyword in the first point is "apparently". The blu-ray movie of the cellular automaton may appear complex if we don't know how to look at it (calculating Kolmogorov complexity is hard in general), but it is in fact very simple - it is generated by that specific cellular automaton. Of course there are arbitrarily long strings that can be represented by a comparatively tiny algorithm, but that wasn't the question - it was, whether anything can be thus represented. The specific movie you mentioned can, many other things can, but most can't.
  • You can easily write a short algorithm that will output a string which has TDKR as a substring, somewhere. But then you're tasked with the problem of specifying the exact location in the string where TDKR appears - this specification will require as much information as TDKR itself. I suspect that with this hypothetical simple rule that deterministically generates our universe, the total output will be so large that specifying which part corresponds to TDKR would require at least as much information as TDKR itself.
-- Meni Rosenfeld (talk) 21:49, 12 April 2015 (UTC)
One such algorithm would be to write down the binary expansions of each integer. Somewhere on this list would appear exactly TDKR. Fair enough. But I think some essence of the original question is missing then. Although it requires information to specify at which space-time point TDKR appears, there are a great many points containing a high degree of "genuine information" (so low entropy? an entropy gradient?). That would not seem to be true of algorithms designed for the express purpose of generating all strings of zeros and ones, which would presumably tend to have very high entropy. In one case TDKR "exists as information" and in the other, it does not. But I would have a tough time trying to formalize an argument like that. Sławomir Biały (talk) 22:09, 12 April 2015 (UTC)

Statistical significance of a low number of positives

Suppose 100 factories each produce 500 items. The items are checked for defects, and it is found that:

  • The average number of defective items per factory is between 0.5 and 1.
  • 99 of the 100 factories produced 2 or fewer defective items, but a single one of the factories produced exactly 3 defective items.

Now suppose someone argues that these statistics convincingly show that the 3-bad-item factory is a bad factory. He argues that:

  • no other factory reached the 3-bad-item threshold
  • the factory had a "huge" percentage increase in defective items (at least a 50% increase) over any other factory
  • the results are unlikely to be attributable to random variance, since we have large sample sizes of 500 items per factory
  • the results are unlikely to be attributable to random variance, since we have a large number of sample populations (100 factories)

Is his argument convincing?

(For the record, I don't think it is. These results still strike me as easily explained by random variation, but I'm having trouble articulating exactly what is mathematically wrong with his argument.)

SeekingAnswers (reply) 20:40, 11 April 2015 (UTC)

Please wait for a better answer, I'm only here because I asked the question below, but my statistics is a bit better than my geometry, so here goes. Firstly, if you set this up as a binomial distribution with n = 500 and p = 0.002 (so each factory's expected number of defectives is 1, the top of your range), you can try a normal approximation, though the usual rule-of-thumb conditions (np > 5 and npq > 5) are not actually met here (np = 1), so treat it as rough. With mean 1 and standard deviation √0.998 ≈ 1, a count of 3 gives a Z value of about 2, which is matched or exceeded about 2.3% of the time. Whether this is within your critical region of significance depends on your priorities.
Secondly, and possibly more importantly, this is not quite how significance tests work. Rather than noting something odd, then showing that it's odd, you notice something odd, and then test how odd it is: by repeating (many times), not by using the original data that led you to the particular conclusion in the first place. (Otherwise almost all coincidences would be non-coincidental.)
So on the second point alone no, and on the first point it depends on your (or his) perspective and level of significance. 81.156.215.185 (talk) 22:14, 11 April 2015 (UTC)
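Since the normal approximation is shaky here, the exact tail probability for a single factory can be computed directly; a minimal sketch in Python, taking Binomial(500, 0.002) (mean 1) from the setup above along with its Poisson(1) approximation:
 from math import comb, exp, factorial
 
 n, p = 500, 0.002  # 500 items per factory, mean of 1 defective
 # P(X >= 3) for one factory: exact binomial, then the Poisson approximation
 binom_tail = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
 poisson_tail = 1 - sum(exp(-1) / factorial(k) for k in range(3))
 print(binom_tail, poisson_tail)  # both ~0.08, vs ~0.02 from the normal estimate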
The things that are wrong are:
  • Point 3: Judging whether a sample size is large enough needs to be done in light of the probabilities/variances involved. Lower-probability events require larger samples. Producing a bad item has a probability of about 0.2%; a sample size of 500 just won't cut it if you want to say anything meaningful.
  • Point 2: The "percentage increase" is also meaningless in this context. A 100% increase from 100 to 200 is nothing like a 100% increase from 1 to 2.
  • Points 1 and 4: No matter how many factories you have, there's a good chance there will be a factory with strictly more defects than any other (with the maximal number of defects increasing with the number of factories). So neither the number of factories, nor the fact that the factory in question is a strict maximum, is meaningful.
But actually, I disagree with the premise of the question. It is not the case that arguments are by default correct unless they do something wrong; rather, arguments are by default incorrect unless they do something right. The argument uses handwaving rather than the relevant statistical tools. Those tools would be looking at the binomial/Poisson distribution that would result from the assumption that all factories are equal, and considering the probabilities of the given deviations under these distributions (see the sketch below). (And this would give you the rule of thumb that the severity of a deviation is roughly given by the difference of the square roots of the expected and actual results, rather than the plain difference or the percentage difference.)
Of course, if you do proper Bayesian inference, this data will serve as evidence that the factory is bad; but the evidence is pretty weak. -- Meni Rosenfeld (talk) 22:34, 11 April 2015 (UTC)
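Putting numbers on points 1 and 4: a minimal sketch in Python, assuming (as above) that each factory's defect count is Poisson with a mean somewhere between 0.5 and 1:
 from math import exp, factorial
 
 def p_some_factory_reaches(k, mean, factories=100):
     # P(at least one of `factories` independent Poisson(mean) counts is >= k)
     p_single = 1 - sum(exp(-mean) * mean**i / factorial(i) for i in range(k))
     return 1 - (1 - p_single) ** factories
 
 for mean in (0.5, 1.0):
     print(mean, p_some_factory_reaches(3, mean))  # ~0.77 and ~0.9998
So even at the bottom of the range, some factory reaching 3 defectives is the expected outcome, not a surprising one.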
Going beyond the math a bit, I'd examine the 3 defective items to see if there's a commonality. If all three have the "whatsit notch" cut in the wrong location, then perhaps recalibrating the "whatsit notch cutter" would be in order. StuRat (talk) 23:25, 11 April 2015 (UTC)

Why don't you tell us the exact value of the average number of defective items per factory, rather than the information that it is between 0.5 and 1? We need it to compute the Poisson distribution. Bo Jacoby (talk) 07:00, 12 April 2015 (UTC).

Assuming that the average number of defective items per factory is 0.5, the actual number of defective items will be 0 for 61±8 factories, 1 for 30±6 factories, 2 for 8±3 factories, and 3 for 1±1 factories. This assumes that the differences are merely statistical variations.

  6j2":N,.(,.%:)100*(L^N)*(^-L=.0.5)%!N=.i.5
 0.00 60.65  7.79
 1.00 30.33  5.51
 2.00  7.58  2.75
 3.00  1.26  1.12
 4.00  0.16  0.40

Note that the number of items per factory (500) does not enter the calculation. Bo Jacoby (talk) 11:32, 12 April 2015 (UTC).
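The same table in Python, for readers who don't read J (a sketch; the mean of 0.5 and the use of √expectation as the ± figure follow the post above):
 from math import exp, factorial, sqrt
 
 mean, factories = 0.5, 100
 for k in range(5):
     expected = factories * exp(-mean) * mean**k / factorial(k)  # Poisson
     print(f"{k}  {expected:5.2f} ± {sqrt(expected):4.2f}")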

The argument is not remotely convincing. If the mean number of defectives per factory is 0.5 (the bottom of the range you give), there will be a factory with 3 defectives more often than not. Maproom (talk) 20:03, 12 April 2015 (UTC)
Exactly! Bo Jacoby (talk) 20:18, 12 April 2015 (UTC).

Edge central angle of a tetrahedron

Resolved

Hi. Not sure I've called it the right thing, but the angle COA on the tetrahedron (sorry, don't know how to link) page. I'm trying to work out how big this is. I mean I know how big it is, but working it out has bugged me for a while. After a few blind alleys I tried labelling all points as 3D co-ordinates - initially with 15 unknowns - and using equal lengths of various sides etc. to eliminate most of them. Setting the center as (0,0,0) helped a lot, but I keep ending at stuff like 1 = 1, or a = a. True, but decidedly unhelpful. I'm thinking that maybe when I use the cosine rule, I'm always taking the positive root, when sometimes I shouldn't be. Anyhow. I don't want a solution (please), but the merest push as to whether this is the way to go? I notice from the (page I don't know how to link to) that the face-edge-face angle is complementary, so is this the way to go? Many thanks. 81.156.215.185 (talk) 21:41, 11 April 2015 (UTC)

I'd start with the coordinates of an equilateral triangle (side 1) in the 2D plane, centered on the origin. Then augment those coordinates with a z-axis value of h, and the 4th vertex will be (0, 0, -3h). Then pick any of the 3 original vertices, and equate its distance from the 4th to 1 - giving you a single equation to solve for h.
To link on Wikipedia, use double brackets - [[tetrahedron]] becomes tetrahedron. -- Meni Rosenfeld (talk) 21:53, 11 April 2015 (UTC)
Thanks, and thanks, but how do I know the '3h' is 3/4 of the 'height'? I've missed that. 81.156.215.185 (talk) 22:32, 11 April 2015 (UTC)
If you take the average of the 4 coordinates, you get the center, which we want to be the origin, (0, 0, 0). -- Meni Rosenfeld (talk) 22:40, 11 April 2015 (UTC)
oops. thanks again. how do I mark this resolved? 81.156.215.185 (talk) 22:57, 11 April 2015 (UTC)
You can put in {{Resolved}}, which I've now done. But simply saying it's resolved is also fine; this reference desk is fairly low-tech as far as Q&A sites go. -- Meni Rosenfeld (talk) 23:08, 11 April 2015 (UTC)
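For the record, a sketch in Python of the construction described above (the specific coordinates are one choice among many; the answer, arccos(-1/3) ≈ 109.47°, is the standard central angle of a regular tetrahedron):
 import numpy as np
 
 h = 1 / (2 * np.sqrt(6))  # from |base vertex - apex| = 1 (unit edge)
 r = 1 / np.sqrt(3)        # circumradius of a unit equilateral triangle
 
 # Base triangle in the plane z = h, 4th vertex at (0, 0, -3h), so the
 # average of the four vertices (the centre O) is the origin.
 A = np.array([r, 0.0, h])         # a base vertex
 C = np.array([0.0, 0.0, -3 * h])  # the apex
 cos_COA = A @ C / (np.linalg.norm(A) * np.linalg.norm(C))
 print(np.degrees(np.arccos(cos_COA)))  # ~109.47, i.e. arccos(-1/3)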

Can anyone find a solution to this problem??

Name a set of prime numbers for which the sum of the reciprocals of all numbers divisible exclusively by those primes (including numbers not divisible by all of them) converges to 4.

For a limit of 2, we simply have the set {2} (that is, the sum 1 + 1/2 + 1/4 + ... converges to 2).

For a limit of 3, we have the set {2,3}. This is correct because the sum of the reciprocals of all numbers of the form 2^a * 3^b converges to 3.

For a limit of 4, it can be tough. The set {2,3,5} is wrong because the limit is only 3.75, not 4. And for sets of the form {2,3,p}, as p increases, the limit gets smaller and smaller and approaches 3.

Can anyone find a set of primes that meets the criteria I want?? Georgia guy (talk) 22:53, 11 April 2015 (UTC)

I'm fairly sure this is impossible for a finite set. The result is the product of p/(p - 1) over all primes p in the set. For this product to equal 4, the numerator Πp must be divisible by 4 (since Πp = 4 Π(p - 1)), which is impossible since the only even prime is 2.
I'm also fairly sure that with an infinite set it's easy, at least in theory. Go over all primes in order, and add any prime which doesn't make the product go above 4.
Trying out this technique gives a list which starts {2, 3, 5, 17, 257, 65537, 4294967311, ...}. If all Fermat numbers were primes they would constitute an appropriate sequence, but alas, they are not. -- Meni Rosenfeld (talk) 23:17, 11 April 2015 (UTC)
In fact all integer sums except 1, 2, and 3 are impossible with a finite set of primes. If 2∉S then the numerator Πp is not divisible by 2 and the denominator Π(p−1) is divisible by 2^|S|, while if 2∈S then the numerator is not divisible by 4 and the denominator is divisible by 2^(|S|−1). That rules out all cases except ∅, {2}, and {2,p}, and clearly the only {2,p} that works is {2,3}. -- BenRG (talk) 05:55, 12 April 2015 (UTC)
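The product formula is easy to check with exact rational arithmetic; a minimal sketch in Python (the example sets are the ones from this thread):
 from fractions import Fraction
 from functools import reduce
 
 def smooth_sum(primes):
     # Sum of 1/n over all n whose prime factors lie in `primes`:
     # each prime contributes a geometric series, so the total is
     # the product of p/(p - 1).
     return reduce(lambda acc, p: acc * Fraction(p, p - 1), primes, Fraction(1))
 
 print(smooth_sum([2]), smooth_sum([2, 3]), smooth_sum([2, 3, 5]))  # 2 3 15/4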
The following Mathematica code generates the list for any target; either until the target is reached, or until the list reaches a specified length.
 PrimesList[target_, length_] := Module[{prod = 1, test = 2, list = {}},
  (* prod is the running product of p/(p - 1); test is the next candidate prime *)
  If[target < 2, test = 2 Floor[(target/2)/(target - 1)] + 1];
  While[
   Length[list] < length && prod < target,
   If[PrimeQ[test] && prod*test/(test - 1) <= target,
    (* accept test: the product stays at or below the target *)
    AppendTo[list, test]; prod = prod*test/(test - 1);
    If[prod < target,
     (* jump ahead to the first odd candidate that could still fit *)
     test = 2 Floor[(target/2)/(target - prod)] + 1
    ],
    test += 2] (* otherwise move on to the next odd number *)
  ]; list
 ]
PrimesList[4, 10] gives
{2, 3, 5, 17, 257, 65537, 4294967311, 1229782942255939601, 88962710886098567818446141338419231, 255302062200114858892457591448999891874349780170241684791167583265041}
-- Meni Rosenfeld (talk) 23:37, 11 April 2015 (UTC)