As regular readers have no doubt noticed by now, posting on the blog
has been slow lately. I’ve been trying to come back up to speed, but so
far, that’s been mainly in the form of bad math posts. I’d like to get
back to the good stuff. Unfortunately, the chaos theory stuff that I was
posting just isn’t good for my schedule right now. Once you get past
the definitions of chaos and what it means, actually
analyzing chaotic systems is something that doesn’t come easily to me – which
means that it takes a lot of time to put together a post. And
my work schedule right now means that I just don’t have that amount of
time.
So, dear readers, what mathematical topics would you be particularly interested in reading about? Since I’m a computer scientist, my background obviously runs towards the discrete math side of the world – so, for the most part, the easiest topics for me to write about are from that side. But don’t let that limit you: tell me what you want to know about, and I’ll take the suggestions into consideration, and figure out which one(s) I have the time to study and write about.
I don’t want to limit you by making suggestions. I’ve tried that in the past, and the requests inevitably end up circling around the things I suggested. But I really want to know just what you want to know more about. So – fire away!
I seem to recall you were working with Haskell in the past. If my memory is correct, perhaps you can delve into monad transformers a bit. Those always did confuse me a bit.
Considering your interest in music, maybe some posts on algorithmic music generation?
Since these are areas where I work, I’d be very interested in anything you have to offer on algorithms to do with various forms of hashing (e.g. collision properties), consistent hashing and DHTs, parity and FEC/erasure codes, information dispersal, etc. There’s plenty of material where this stuff touches on cryptography, but outside of that it’s hard to find writing that’s accessible without being superficial, and that’s where you seem to excel.
How about wavelets?
What about some math education topics e.g. what do you think is the most effective way to teach kids fraction addition and multiplication?
Or is there anything you think that’s bad in the current syllabus (primary/middle schools)?
I’m still curious to hear your take on Compressed Sensing, particularly in addressing this (old) Wired article.
Yes, let’s hear about math education, like “traditionalists” vs. “reformers” (the “Math Wars”). Or maybe that’s old news…
I’d love to hear your thoughts on this:
http://news.ycombinator.com/item?id=1466164
In particular, what a layman should do to avoid believing that a statistical analysis is yielding meaningful results when it is really just showing coincidental patterns in the data.
Different types of logics was a topic you covered that is very useful, interesting, and not well covered at a non-professional level elsewhere.
I am beginning a self-study and will start with your articles, for example:
http://scientopia.org/blogs/goodmath/2009/04/a-quick-bit-of-temporal-logic-introducing-ctl
1 – Describe the discipline of Topology and give some examples of its applications. I’ve got a math degree and I never really got this one!
2 – Ditto with Category Theory. Seems like Category Theory is to Abstract Algebra as Abstract Algebra is to Classical Algebra, but Wikipedia is only taking me so far…
(BTW – The suggestions posted higher in the comments are great and probably of a wider appeal. But I love me some seriously abstract math… 🙂 )
I’ve been doing a lot of study on my own about bayesian modeling, specifically graphical mixture models. If you have anything to share about that, I’d appreciate it.
How about addressing if 1 is a prime number.
Only messing of course, I’ll go now…
I’ve long pondered a simple question that I cannot seem to formalize well enough to answer and thought it could be good fodder:
Assume a sphere (e.g. an idealized Earth) free floating with no external reference frame. It is clear that you can get it to spin on one axis (relative to itself) because the earth rotates on its axis. Is it possible to have two axes of rotation by imparting a torque not perpendicular to the axis of rotation or do you wind up with a single axis that is a vector product of the two spins? I think it is the latter but don’t have the math chops to know.
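For reference, here’s the standard rigid-body bookkeeping, as a sketch assuming an ideal rigid, uniform sphere and torque-free motion once the impulses stop:

```latex
% Instantaneous angular velocities add as vectors:
\[
  \vec{\omega} \;=\; \vec{\omega}_1 + \vec{\omega}_2 .
\]
% A uniform sphere has inertia tensor $I = \tfrac{2}{5} m r^2 \, \mathbf{1}$,
% so its angular momentum is parallel to the angular velocity:
\[
  \vec{L} \;=\; I \vec{\omega} \;=\; \tfrac{2}{5} m r^2 \, \vec{\omega} .
\]
% With no external torque, $\vec{L}$ is conserved, so $\vec{\omega}$ stays fixed:
% the sphere ends up spinning about a single combined axis, which is the
% vector sum (not the vector product) of the two spins.
% (For a non-spherical body the story is richer: $\vec{L}$ and $\vec{\omega}$
% need not be parallel, and the instantaneous axis precesses.)
```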
I second Jeff Darcy’s comment. Doing something related to hashing would be interesting.
Perhaps you can talk about some multidimensional database stuff like UB-trees, R-Trees, etc.
This is probably so far off as to be comical but I came up with that suggestion using the following sequence of logic.
Dr. Chu-Carroll works for Google. Google, among other things needs to mess with gigantic multidimensional databases. Therefore, Dr. Chu-Carroll might know something about them.
Tarski-Banach theorem. I read about it at school, and was disappointed that it was not covered at university.
I have read (and now, with the second edition, re-read) Nassim Taleb’s Black Swan. He is clearly (and without subtlety) arguing for a fundamental shift in the application of mathematics to economics (if not life). As narrative I’ve found his book astoundingly fun to read. But I do wonder about the “seams” in his work. You have touched on some of his themes in the past. Is there a credible argument (for/against) the foundation of his assumptions around risk and analysis? He disses everything from bell curves to asymptotic assumptions about anything. His intellectual arguments, which most recently have shifted toward robustness, as of today seem unchallenged, at least in public forums.
How about Bayesian probability? I’ve been reading about it a bit on places like the Less Wrong blog, but I’m interested in hearing more about *why* it’s effective.
Combinatorics! Number Theory!
Machine learning?
How about “random matrix theory”? New Scientist featured it a couple months ago and it sounded fascinating, but they didn’t get into the underlying math at all. The article is at http://www.newscientist.com/article/mg20627550.200-enter-the-matrix-the-deep-law-that-shapes-our-reality.html. From the lead paragraph:
“SUPPOSE we had a theory that could explain everything. Not just atoms and quarks but aspects of our everyday lives too. Sound impossible? Perhaps not.
It’s all part of the recent explosion of work in an area of physics known as random matrix theory.”
Well, I’m a bit biased, but I’d like to see some explanations of analyzing algorithms for complexity. My thesis was developing algorithms to do some things related to group theory and algebraic topology, and I’d like to be able to analyze them for complexity (maybe get another paper out of it!). When I get the time I’m going to get a book and just work through it, but seeing how someone from CS thinks about it would probably be useful.
What about the Continuum Hypothesis?
Third the math education suggestion.
Since you’ve already invested some time in topology, the following theorems may not take too long to get your head around, and they are fascinating results, speaking as a math grad student. The theorems are from Topology by Munkres.
The Urysohn Lemma: Let X be a normal space, meaning given a pair of disjoint closed sets you can separate them using disjoint open sets; let A and B be disjoint closed subsets of X. Let [a,b] be a closed interval in the real line. Then there exists a continuous map f:X->[a,b] such that f(x)=a for every x in A, and f(x)=b for every x in B.
Which is used in the proof of the Tietze Extension Theorem: Let X be a normal space; let A be a closed subspace of X. Then
(a) Any continuous map of A into the closed interval [a,b] of R, the real line, may be extended to a continuous map of all of X into [a,b].
(b) Any continuous map of A into R may be extended to a continuous map of all of X into R.
These seem obvious, and are more or less so if X is a subset of R, but showing that they hold in greater generality takes some inventiveness, and at least left me in a state of awe and a bit of confusion.
Another is the Tychonoff Theorem: An arbitrary product of compact spaces is compact in the product topology.
The proof uses Zorn’s lemma, which is a cool tool in and of itself, but even better, the proof in Munkres is awesome. It also uses the basis of a topology and the finite intersection property, which will increase the amount of explaining you have to do beforehand.
There was an article in New Scientist recently about random matrix theory (http://www.newscientist.com/article/mg20627550.200-enter-the-matrix-the-deep-law-that-shapes-our-reality.html), but it didn’t go into much detail. How about an analysis of that?
Banach–Tarski paradox along with a discussion of the axiom of choice might be fun.
Perhaps you could discuss the status of the Wolfram 2-state, 3-symbol Turing machine. Wolfram and co claim a proof has been found, but Vaughan Pratt disagrees, it seems: http://en.wikipedia.org/wiki/Wolfram's_2-state_3-symbol_Turing_machine
Or, more generally, an analysis of some of Wolfram’s recent work, which, on the surface, shows signs of crankishness.
How about talking about text search algorithms? I have always found the basic algorithms (prefix/suffix trees, tries) to be interesting, although the variants for error-tolerant searches get a bit complex. You’re probably already familiar with the basic document-search applications, but I find that it turns up in surprising places. I’m in bioinformatics, and we use variations of text searching for a wide variety of things, from gene searches to identifying proteins. I can provide a few good, clear references if you’re interested.
How about computability? The whole field fascinates me somehow, even if I don’t understand a thing about it. Recursive functions, primitive-recursive functions, computability, hypercomputation — all this stuff sits inside my head as meaningless words, begging for someone to actually tell me what they mean.
Mark: I did a presentation on one of the classic proofs of Dirichlet’s theorem on primes in arithmetic progression at the end of my undergraduate days. Dunno if you can turn it into a blog post, but I dug up the slides recently so could presumably reconstruct the rest of the presentation and give you a rundown in person of the proof if you want. (The presentation took a full hour, and was pretty much just the proof, but although it’s potentially lots of material it doesn’t really require much beyond what you get in two terms of calculus as far as mathematical background)
+1 for Algebraic Topology. I’m studying up on this as I prepare to dive back into higher mathematics. I’ve always found your explanations of numbers and sets to be particularly effective.
How about explaining what the Calculus of Variations is?
+1 to Category Theory, and if you know something about it, higher dimensional categories.
Or, applications of category theory to haskell programming. I’m specifically thinking of a video I saw on untangling tangles: http://www
Wavelets would be good. Bayesian inference as related to Kalman filters and particle filters would be interesting.
Group theory or algebraic topology with a focus on how it’d be useful for either computer science or programming would be interesting (I’ve studied both, but didn’t really find much of a practical connection at the time).
Manifolds. Explain them to us nonspecialists.
Anything you can share about artificial intelligence would be interesting.
Kolmogorov complexity!
Machine learning would be interesting too.
Random matrix theory would be a fun topic. Once you’ve explained what an “eigenvalue” is, then demonstrating the Wigner semicircle law would be pretty easy (proving it would be harder).
That New Scientist lead paragraph makes me want to hurl, but I guess that’s just par for the course.
i like the stuff like that ulam’s spiral article you did last week. the weird, interesting world of math that a layperson can marvel at without needing a degree in higher mathematics, i guess 🙂
I would like to hear your thoughts on math education as well – in particular its current state in the US. Are you familiar with Math for America?
Anything you could share on general strategies to compute things faster would be very interesting; general concepts like divide & conquer or something like that.
I’m not at all sure whether it’s your cup of tea, but I’ve always longed to understand differential forms…
My ideas were already addressed, but let me echo them:
computability
what makes a process fast (or slow)
what makes a computer fast (or slow)
I have a reasonable, superficial sense of these topics. What I like about your posts on this sort of stuff is that they allow me to understand a few steps deeper.
I have some understanding of the mathematics of category theory, but I know nothing of its computer science applications. I would like to learn something about that topic.
I’d love to see a couple of posts on Bayesian statistics, and specifically on computational approaches to Bayesian data analysis (how exactly does that WinBUGS program work?).
Some topics I’d find useful in my work: (1) Educating at the K-12 level, especially what topics or methods you would advocate for the 9-12 range. (2) Updates on computer security for the home-user. (3) Issues in the philosophy of math or computer science.
Data Mining for Dummies. Or why we should stop giving out information with our store discount cards because this lets Publix know too much about us.
Some that have been mentioned that I’d like to second —
1. Banach–Tarski Paradox
2. Machine Learning
3. A good explanation of an “eigenvalue”
4. Combinatorics
5. Number Theory
I ranked them in order of my interest.
Oh yeah — A good explanation of the Abel–Ruffini theorem and Galois groups.
How about something about Structural Operational Semantics? Doesn’t that also fit in with parts of your work?
Cantor.
Thanks for asking.
Yes to Calculus of Variations, and other optimization related topics. How about SVMs?
My vote goes to anything related to prime numbers. Got to give them some love!
By the way, the post on Ulam’s Spiral was great!
As a long time lurker I am wondering what your thoughts on http://mathoverflow.net/ are. Say, compared to stackoverflow (e.g. the way they are trying to keep it research only) or with respect to cranks and the way they are (or are not) handled; for a recent Cantor crank see http://mathoverflow.net/questions/29879/has-this-paradox-been-known-in-literature-who-was-the-first-to-state-it-what-is
Combinatorial Optimization.
Every introduction to this topic I have read has been written for grad. students. I would love to see an introduction for people at the undergraduate level.
I would be interested in your thoughts on several topics.
Simulation. You’ve discussed simulation a few times in the past, but I would be interested in your thoughts about the limits of simulation, particularly from a mathematical perspective. There are a lot of opportunities for both discrete and continuous simulation and modeling in what I do, such as simulating input signals for testing, FEA and CFD modeling and simulating market demand response. Aside from the obvious problem of “garbage in; garbage out,” I am often concerned that solutions may be constrained by the tools. What characteristics or characteristic results can we expect of simulations, given the math involved?
Applications of signal analysis, especially relatively simple applications. I used to work with a guy who was an expert in signal analysis, having developed targeting systems for the military, optical measurement systems and predictive maintenance algorithms for industrial equipment. He had some fascinating stories, but almost no ability to explain the how of signal analysis. I’ve thought, ever since, that there are many, many potential applications in industry, but never found a good introduction.
The implementation and use of genetic algorithms. I believe that much product (and probably process) design work could be completed with genetic algorithms, but while I understand the basic idea of how they work, so far the skills needed to actually code them have escaped me. What tools are used? How are the data structures designed and manipulated? What sort of problems are amenable to solution by genetic algorithms, and which ones are not? Looking at your post on the “bent paperclip” antennae, it seems like any problem, with almost no constraints, could be solved with GA, but clearly they had constraints on the structure of the antennae: segments probably had to be of finite length, segments had to be contiguous, there was probably a limit to the number of segments and of course there’s the response to radio signals. Easily put into words, but what sort of data structures and code would we be working with?
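For what it’s worth, here is a minimal sketch of the moving parts in one common style of GA, using a toy bit-string genome and a made-up fitness function rather than anything antenna-specific; all of the names and parameter values here are hypothetical:

```python
import random

# Toy GA: evolve a bit string toward a hidden target pattern.
# The data structures are deliberately plain: a genome is a list of 0/1 ints,
# and a population is a list of genomes.

GENOME_LENGTH = 32
POPULATION_SIZE = 60
MUTATION_RATE = 0.02
GENERATIONS = 200

TARGET = [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Score = number of bits matching the hidden target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def tournament_select(population, k=3):
    # Pick the fittest of k randomly chosen individuals.
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    # Single-point crossover: prefix of one parent, suffix of the other.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    population = [
        mutate(crossover(tournament_select(population), tournament_select(population)))
        for _ in range(POPULATION_SIZE)
    ]
    best = max(population, key=fitness)
    if fitness(best) == GENOME_LENGTH:
        break

print(f"best fitness: {fitness(best)}/{GENOME_LENGTH} after {generation + 1} generations")
```

For a real design problem the genome would encode domain parameters instead (segment lengths and bend angles for an antenna, say), with the constraints enforced either by the genome encoding itself or as penalties in the fitness function.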
I’d like to second Tim (comment 26) and ask for a post on Banach-Tarski + axiom of choice
Eigenvalues, eigenfunctions, PCA and ICA
One way for me to track what I’m interested in about Mathematics is what Mathematics Fiction I find myself writing.
0. Preface
Since before my double B.S. in Mathematics and English Literature from Caltech, 1968-1973, I was already passionate about both fields, and had already done work in each for which I was paid. Professionally, I am a Mathematician and an Author and a Teacher. The intersection of Math and Fiction in the Venn diagram of my conscious life is major.
Hence my novels and short fictions about Mathematics are particularly important to me, in the same sense as my other indices of this type suggest (i.e. “My Life in Spaceships”). The fiction listed below adds up to a book (385 pages, 92,650 words in grand total, encompassing 10 short stories, 2 of them actually novellas and 2 novelettes).
In the below indices, I omit my fiction and poetry which is primarily about Science of one kind or another, or Aerospace Engineering, albeit Math appears as a tool in a minor way in each. I also omit version history and submission history, as those appear in other indices.
1. My Mathematics Fiction
A. Short Fiction
(1) “Garrett Lisi’s Exceptionally Simple Theory of E8 Stardrive”
Draft 2.3 of 31 Mar 2010, 33 pages, approx. 7,100 words, [+2 cites; fixed 2.0’s 15 typos]
[ ] submit to contest before end of June 2010, as #1 priority of Math Fiction submissions.
This was triggered by a contest:
http://crossedgenres.com/simf/contest/rules/
Mathematical Subject: Lie Algebra
(2) “Exotic Smooth Manifolds and a Better World”
Draft 2.0 of 5 p.m. 2 April 2010, 24 pages, approx. 5,700 words; useful rejection letter from Stan Schmidt at Analog, submitted 9 Apr 2010, dated 9 June, postmarked 23 June, rec’d 25 June 2010; next to: ___.
Mathematical Subject: Topology
(3) “Norbert Wiener: Wolfe in Sheep’s Clothing” [file: AxiomaticWolfe], 20,850 words, 75-page incomplete Draft 4.0 of 25 June 2010 added 4 of 10 pages handwritten notes on Theoretical Magico-Physics, and corrected numerous typos; discussed phase-diagram issues with CMC at dinner out 26 June 2010;
[replaced 18,500 words, 66-page incomplete Draft 3.0 of 10 Feb 2009]
Mathematical Subject: Biography/Math History
(4) “Sex, Savagery, and Semiprimes”, 4400 wds, 23 pages, [file: Rape]
Mathematical Subject: Number Theory, Biography/Math History (Sofia Vasilyevna Kovalevskaya (15 January [O.S. 3 January] 1850 – 10 February [O.S. 29 January] 1891).
(5) “War Between the Numbers”, (2,600 wds), 17 pp., [WarNum.doc]
Mathematical Subject: Number Theory
(6) “Open Source Grave”, 9,400 words (4 of 6 chapters completed, estimated 13,500 when done), 49 pp.
Mathematical Subject: subjective experience of becoming Mathematician;
Quantum Computing
[I can email you the text of any story whose title or subject grabs you]
It would be awesome if you wrote a detailed post about file compression, since you’ve disproven claims about it before. It would be interesting to see some compression methods that actually work.
I would like to see your take on techniques for search tree pruning or deterministic finite automata. I am slowly working my way through Hopcroft’s automata theory book and I am having a good time.
Algebraic Number Theory, Complex Analysis, Graph Theory (specifically Ramsey Theory and various graph algorithms) are all pretty interesting to me.
How about the developing connection between Riemann surfaces and finite graphs?
What Mark (#36) said: AI.
Plus, your thoughts about what Doug Hofstadter wrote about it: if and how intelligence depends on creativity, making errors and being fault-tolerant of them; reflection, thinking fuzzy (but in a good way), juggling concepts and roles (e.g. the varying roles in the concept “first lady”).
And how to implement all that without using 2 kg of squishy grey matter. 🙂
I’ve always found Polya’s Enumeration Theorem and Burnside’s Orbit theorem to be pretty cool and easy to cover (assuming very elementary knowledge of group theory). Or for those who don’t know group theory, symmetry of the square can be a rather interesting concept. One can connect both of these to Galois theory with the polynomial x^4 – 2.
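For instance, a quick orbit-counting computation for two-colorings of the corners of a square, counting rotations only (a sketch of the kind of example meant above):

```latex
% Burnside / orbit-counting: the number of distinct colorings is the average
% number of colorings fixed by each group element.
\[
  |X/G| \;=\; \frac{1}{|G|} \sum_{g \in G} \left| X^{g} \right|
        \;=\; \frac{1}{4}\bigl( 2^{4} + 2 + 2^{2} + 2 \bigr)
        \;=\; \frac{24}{4} \;=\; 6 .
\]
% The identity fixes all 2^4 colorings, the 90-degree and 270-degree rotations
% fix only the two monochromatic ones, and the 180-degree rotation fixes 2^2.
```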
Concurrency. Any insights on how to design software to work well on distributed resources.
I was looking at Bayesian modelling and OWL (Web Ontology Language). I was interested in the question of how to model things such as “this fact was asserted by these data sources, and I trust them this much not to falsely assert a fact. How much do I believe it now?” I’m working with a project doing data aggregation in semweb/RDF land, and they really, really need a model for provenance of RDF triples.
I would really appreciate some perspective on getting jobs in maths/compsci, since you’ve been through it yourself – I’m entering the final year of my undergrad degree and am still undecided on what the hell to do with it. Questions like how important postgrad study is versus real-life experience never seem to get a satisfactory answer, since obviously academic advisors have an agenda to sell. And I seem to recall that you actually had experience in recruiting new employees when you were at IBM?
If you’d rather be posting actually /about/ maths, then decision processes such as optimisation or game theory would definitely be welcome. Decision problems are some of my favourites because they are right on the line between mathematically interesting and possible to explain to a non-scientist.
Another vote for Taleb.
How about Measure Theory and its applications to the area of Finance?
Insofar as my meager understanding comprehends, machine learning, genetic algorithms, and data mining all overlap with the discipline of Artificial Intelligence. I’d also suggest something with respect to AI, fuzzy sets, neural networks, genetic/memetic/evolutionary algorithms, data mining, etc.
With respect to 63, if Douglas Hofstadter really meant to say such things, I feel baffled as to why he hasn’t talked more about fuzzy subsets, fuzzy logic, and fuzzy mathematics in his books, and why he hasn’t looked for more information on those subjects. Or has he?
With respect to mathematics and fiction, what would exp(1) say if it could speak? What would pi say if it could speak? What would the rationals say to the reals? Would they call them libertines and think that reals lacked discipline in their social structure? Would the complex numbers think the real numbers sinners for abandoning their imaginary nature (the other way around)? Would the positive numbers think the negatives heretics for abandoning the Magnitude Creed? That one may sound funny, but remember Kronecker “God made the Natural Numbers, the rest is the work of Man.” (not an exact quote).
Would the lines, planes, cubes, etc. feel the Groups, Fields, Monoids, SemiGroups, etc. deserters for leaving the Beautiful, Blessed land of Geometry? Would the integrals feel it a barbarism that someone dared say they had a common ancestor with the derivatives and spill mountains of ink over such and oppose such a teaching in courses on Calculus for beginners?
You can address Meyer’s latest definition of ID: http://www.evolutionnews.org/2010/06/stephen_meyer_describes_his_de036251.html
I have recently heard from someone else, who heard it from NPR (so take this with a HUGE grain of salt) about some miraculous “Eureka program” that managed to come up with relativity after observing a pendulum for 3 weeks, and some very complicated math that accurately describes the workings of a cell just by watching some for a few weeks as well. It sounds way too awesome to be true to me, so I assume that’s the case. If you’ve heard otherwise I’d love to hear it. I just haven’t seen anything else online about it.
Mark, I’d really like to find out more about some recent encryption developments in the ‘Finite Domain’ area, involving ‘secure random permutations’ using block ciphers: specifically, encrypting, say, a 16-digit credit-card number to an encrypted 16-digit number so that the original source number can be decrypted (format-preserving encryption, the Thorp shuffle, Feistel Finite Set Encryption Mode (FFSEM), FFX), and the work done by Rogaway and Black (“Ciphers with Arbitrary Finite Domains”) and such-like. There is very little practical information available in this ‘new area’ of cryptology, as almost all encryption is directed towards large string-based messages.
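As a rough illustration of the Feistel idea behind FFSEM/FFX-style schemes (this is only a sketch, not any of the published constructions; the round function, round count, and key handling are placeholder assumptions, not production cryptography):

```python
import hashlib
import hmac

# Toy format-preserving encryption on 16-digit numbers: a balanced Feistel
# network over two 8-digit halves. Because the Feistel rounds permute the
# whole domain of 10^16 values, the ciphertext is again a 16-digit number.

HALF = 10 ** 8   # each half of a 16-digit number is an 8-digit value
ROUNDS = 10

def round_function(key: bytes, round_index: int, half: int) -> int:
    # Placeholder PRF: HMAC-SHA256 over (round, half), reduced mod 10^8.
    msg = f"{round_index}:{half:08d}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % HALF

def encrypt(key: bytes, plaintext: int) -> int:
    left, right = divmod(plaintext, HALF)
    for r in range(ROUNDS):
        left, right = right, (left + round_function(key, r, right)) % HALF
    return left * HALF + right

def decrypt(key: bytes, ciphertext: int) -> int:
    left, right = divmod(ciphertext, HALF)
    for r in reversed(range(ROUNDS)):
        left, right = (right - round_function(key, r, left)) % HALF, left
    return left * HALF + right

key = b"demo key, not a real one"
card = 4111111111111111
ct = encrypt(key, card)
assert decrypt(key, ct) == card
print(f"{card:016d} -> {ct:016d}")
```

The published schemes differ in important ways (AES-based round functions, tweaks, analyzed round counts, and cycle-walking when the domain isn’t a perfect product), but the core trick is the same: a keyed Feistel permutation over exactly the set of valid plaintexts.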
Mere suggestions:
* The book, Gödel, Escher, Bach
* Kurt Gödel’s Incompleteness Theorem
* The Millennium Prize Problems
* The Riemann hypothesis
* Artificial Intelligence research (is it B.S.?)
* Computational fluid dynamics
* SETI (Not your field, but I’d like to hear your thoughts)
* Simpson’s Paradox
Presburger arithmetic.
How about the limits of concurrency? My colleagues and I have been arguing lately about the following statement.
All computational problems in P can be solved using parallel algorithms; this can be shown for any sequential algorithm by scaling the problem size. So one can argue, and some have argued, that “all P problems in CS can be solved with a parallel method; it’s just a matter of scaling the problem size.”
Cheers.
Game theory, linear (and maybe non-linear) programming, the recent computer decipherment of an ancient Semitic language, and a discussion of primality testing being in P and why factoring is not believed to be NP-complete.
This is way off topic but quite interesting. Could you consider writing an article summarizing your experiences and views of different rulesets for RPGs? Like GURPS, D20 and some of the more obscure ones. Maybe some design thoughts of your own; if you’re like me, you have probably developed a few of your own.
My local paper just carried an article, syndicated from a major US paper, about the “problem” of valedictory inflation: US high schools often have more than one valedictorian, sometimes dozens of them. For non-USAians: the highest-ranking graduate (highest grade-point average) gives a valedictory address at the graduation ceremony. GPAs traditionally were calculated on a four-point scale, A=4, B=3, etc. Ambitious students aspire to a 4.0 GPA. With or without grade inflation, a large school might have more than one 4.0 valedictorian.

The introduction of advanced placement classes (intended to be college level, and accepted as college credits by many colleges/universities) created a bit of a problem. A potential 4.0 student might avoid AP classes, or particular ones, to maintain the “perfect” GPA. Someone had the bright idea of scaling AP classes, A=5, B=4, etc., and this seems to be a near-universal practice. If a school has four AP classes, and assuming six classes per year for four years, we have a straight-A GPA of 4.167. If a student took fewer AP classes than the four offered, they rank lower. No problem, right? Just honor the winner(s).

I don’t know if this happened after the rescaling of AP grades, but here’s where good math / bad math comes in. Suppose there are two students with the maximum number of AP courses, both with straight A’s. But one student is more ambitious, and wants to take extra courses (four years of French, Orchestra, etc.); the usual way to do this is to take care of some required or low-level courses in summer school. The result: every extra (regular) course taken lowers the GPA!
In the unlikely event that any school board members or school administrators are reading this: imagine a student who took four AP classes and a million regular classes. Without using a calculator, what would the GPA look like? In practice, it takes only one additional class to lower a GPA as recorded; a student could even drop his last period class senior year and go home early, if 24 credits are not required, and raise his GPA. Hence the practice of all 4.0’s and higher as co-valedictorians. If there is one or more above 4.0, that could be the standard, eliminating the mere 4.0’s; I don’t know if this is done in practice.
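To make the arithmetic behind that thought experiment explicit (assuming four AP grades weighted at 5.0 and N regular grades, all straight A’s at 4.0):

```latex
\[
  \mathrm{GPA}(N) \;=\; \frac{4 \cdot 5 + 4N}{4 + N}
               \;=\; 4 + \frac{4}{4 + N}
               \;\longrightarrow\; 4.0 \quad \text{as } N \to \infty ,
\]
% so every additional regular A pulls the weighted GPA down toward 4.0:
% N = 20 gives 4.167 (the figure above), N = 21 gives 4.16, and so on.
```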
I’ve heard of examples of specific algorithms exploiting qubits, but is there any work on higher order languages for quantum computation? Can a quantum scheduler use entangled task calls?
A good idea might be to do a series of posts on a particular book, as has been done many times in many places with SICP: you select a book and tutor us, and we discuss the chapters/exercises in the comments.
I’m intrigued by the theory and algorithms behind constraint satisfaction. I was first introduced to the concept in college when a friend praised Sudoku for its addictive qualities, but groused that “This is what computers are for” (particularly given how much time he was spending solving puzzle after puzzle).
statistical analysis of spherical, directional data
this has nothing to do with my dissertation problem…
I’d be interested in hearing what you have to say about some of the current research into the neuroscience and psychology of mathematical reasoning. Even off-the-cuff reflections would be appreciated. Not a bad place to start might be the exchange between Alain Connes and J-P Changeux in “Mind, Matter and Mathematics” (ISBN 0691004056). Butterworth, Dehaene, Spelke and Carey make interesting reading also.
I read somewhere that pi expressed as a negative base could be written as a repeating decimal. I have wondered about the details.
How about writing some articles on building simple web based applications using PHP or Java? I am looking to learn how to build calculators for students to use for my tutoring website. The code could be made available on your site for those interested.
It’s not strictly about math, more physics, but still interesting to do the math on this one:
http://jalopnik.com/5556198/wind+powered-car-proves-internet-naysayers-wrong
especially since it is sponsored by google 😉
I’d also like to hear your thoughts on Presburger arithmetic (see 75 above).
Presburger arithmetic contains only the addition operation and equality, whereas Peano arithmetic also contains the multiplication operation. Yet, Presburger arithmetic is a decidable theory, whereas Peano arithmetic is not. What is it about adding multiplication that causes it to become undecidable?
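For concreteness, a sketch of the kind of statement Presburger arithmetic can express and decide, using only addition and equality:

```latex
\[
  \forall x \; \exists y \; \bigl( x = y + y \;\lor\; x = y + y + 1 \bigr)
\]
% ("every natural number is even or odd"). Multiplication by a fixed constant
% is still expressible as repeated addition, e.g. $3x = x + x + x$; what the
% language cannot express is multiplication of two variables, $x \cdot y$.
```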
The tendency to overuse certain models/methods due to their ease of computation or widespread use. Examples: the Normal Curve, the Kalman Filter, Brownian Motion, Fourier series.
In particular, the statistical assumptions above blew up in the recent financial environment (also Long Term Capital).
Finance and Economics are full of questionable assumptions and math.
How about something on “The Unreasonable Effectiveness of Random Number-Generators?” How much randomness do we really need? Indeed, why does hashing almost always work so well?
(And are there other things — besides time, space, randomness, and oracles — which we should consider as resources when we think about computational complexity?)
Oh, and Gröbner bases.