What is Math? - Blog
Thu, 19 Oct 2017 11:03:49 -0700

The Sense in Which I'm a Platonist
Mon, 02 Nov 2015 18:16:05 GMT
http://fieldcady.com/blog/the-sense-in-which-im-a-platonist

Ok, so I'm most definitely not a Platonist in any woo-woo, quasi-religious sense; mathematical abstractions are not "real".  On the other hand, many of them do seem to be "natural".  Let me be very concrete: I believe that if humans ever meet a technologically sophisticated alien race, they will have a number system that matches ours.  I doubt they will have the Peano axioms of arithmetic, or the idea that the integers are polymorphic functions.  They might not even care about prime numbers.  But the aliens will have arrived at numbers as we understand them.  I also believe they will understand the difference between spheres and donuts, but I doubt they'll express it in terms of point-set topology.  In this sense certain math abstractions are the "right" way to describe the universe, and things that we prove about them can be taken to be "true".

Sorry for the purely philosophical post - I usually try to be more real-world than this.
Is Science Really for Public Consumption?
Mon, 14 Sep 2015 01:28:12 GMT
http://fieldcady.com/blog/is-science-really-for-public-consumption

First off, the answer is yes: it's a very good thing for scientific knowledge and the thrill of scientific exploration to be made generally available to all.  But that's not what I'm talking about.

I recently read <a href="http://arstechnica.com/science/2015/08/mit-claims-to-have-found-a-language-universal-that-ties-all-languages-together/">this</a> article about the supposed discovery of a language universal by researchers at MIT.  I was only able to read the news article and the original research paper's abstract, but knowing a little bit about linguistics and a lot about probability, I was underwhelmed.  I don't want to say that it was a dud of a paper, but its conclusion seemed pretty obvious a priori, and it has almost nothing to do with the Chomsky-style language universals it was being touted as.  The paper is a classic example of work that is 20% incremental progress, 80% PhD students trying desperately to publish something.

But I can only see that because I DO know about linguistics, probability, and the inner workings of academia; to outsiders this work would look like a real discovery.  This misconception is encouraged by journalists, who have every incentive to sell incremental gains in evidence as if they were breaking news.

This is a very innocuous case.  Do I need to mention the one, solitary research paper that found a link between vaccines and autism, and that was later completely discredited?  Or how about the economics paper about the dangers of high public debt, which was used as the basis for policy decisions and was <a href="http://www.bloomberg.com/bw/articles/2013-04-18/faq-reinhart-rogoff-and-the-excel-error-that-changed-history">recently</a> found to contain an Excel error?  The list goes on.  And it is all the worse because these things discredit the scientific endeavor as a whole, giving fuel to those who reject established scientific truths.

So I propose that, in general, journalists and politicians should steer clear of delving into scientific papers.  They should instead focus on scientific *consensuses*, the invaluable knowledge we acquire when the scientific sausage is done being made.  There is a ton of room for debate on this point, but I've been thinking about it lately and would love some discussion.  Thoughts?

Quick note: you don't need a Kindle
Tue, 05 Aug 2014 21:10:39 GMT
http://fieldcady.com/blog/quick-note-you-dont-need-a-kindle

I've gotten some questions asking whether people who don't have a Kindle can read my book.  The answer is most definitely yes!  The Kindle App is free and there is a version for basically any device.  My own physical Kindle mostly just collects dust, but I am an avid user of the app on my Android phone.  You do need either a Kindle or the Kindle App to read the book, but IMHO the app is the next generation of reading anyway, so I encourage you to get it with or without What is Math.

Book is Published!
Mon, 04 Aug 2014 20:30:00 GMT
http://fieldcady.com/blog/book-is-published

My long-in-the-works book What is Math is now available on Kindle!  It contains pretty much everything I have to say about math, cognition and language, as well as awesome historical context and personal anecdotes.  If you've enjoyed this blog or are interested in the human side of math, then I encourage you to check it out.

Having spent most of my life working with math in one form or another, I am convinced that curious people of all backgrounds could benefit from a novel take on the subject.  There are a lot of misconceptions out there, in everybody from laymen to professional math researchers.  Even if you don't end up agreeing with my thesis, the book covers a fascinating range of topics, and I think there will be something new and exciting for everyone.

As always I would love to hear any questions or feedback you might have.  Thank you all!

Cool Book on Consciousness
Fri, 04 Jul 2014 20:28:27 GMT
http://fieldcady.com/blog/cool-book-on-consciousnes

It turns out that something has been written about consciousness that is worth reading!  Most of what people say about the C word amounts to pseudo-philosophical mumbo-jumbo, but I've found a new book that addresses it as a serious scientific subject, with actual data to back it up (oh my!).  If you're curious to know more about the stuff between your ears, consider giving it a try.

Consciousness and the Brain (by Stanislas Dehaene, the mathematician-turned-neuroscientist whose other books taught me so much about how our brains process numbers) surveys a lot of the research that has been done on the empirical phenomenon of consciousness.  A typical experiment is to flash words on a screen so quickly that subjects don't notice them, or just slowly enough that they do, and scan the subjects' brains in each case.  It turns out that a huge amount of processing gets done below the threshold of conscious perception, but that processing stays within only a few brain regions.  Above the consciousness threshold, though, the whole brain starts to resonate with the idea, putting it into a condensed form that can be stored in working memory.

This is far from the last word on the subject, but it's exciting to see consciousness move from the realm of gibberish quackery to the point where we can answer substantive questions about it.  Another person who seems to be worth reading (Dehaene speaks highly of him, but I haven't followed up myself) is the philosopher Daniel Dennett.
Trip to Egypt!
Tue, 22 Apr 2014 07:26:10 GMT
http://fieldcady.com/blog/trip-to-egypt

Ok so maybe I just missed the memo on this one growing up, but the ancient Egyptian ruins used to be all in color, not bare stone.  Beautiful murals all over.  Back in their heyday they would have been like pastel kaleidoscopes out in the desert.  I probably should have learned this like twenty years ago, but better late than never!  So cool!  It feels like the first time I learned that dinosaurs actually had feathers :)

But yes, I did just return from Egypt, the home of the oldest mathematical documents in the world (hey, I had to relate the trip to math in some way on this blog).  I climbed a pyramid, sailed down the Nile, saw Tutankhamen, and fell off a camel.  Fun times!

Egypt is going through a tough time right now, and I'm afraid it shows.  The tourism industry has been gutted, and desperate people are stumbling over each other to swindle, beg or price gouge the few foreigners to be found.  You have to be on your guard.  The streets are dirty, the people are poor, and your heart aches to see it.

At the same time though you are physically safe, and if *you* approach a random stranger (rather than talking to somebody who comes up to you), chances are they're just a normal, nice person, doing their best to get by.  If you're willing to look past the poverty, this is actually an exciting time for Egypt, and I'm very optimistic about their future.  With the removal of their would-be theocratic dictator, a thoroughly-learned lesson about radical Islam (they LOATHE the Muslim Brotherhood now), and their upcoming elections, Egypt could very well be on the cusp of a functioning, secular democracy.  With any luck, in a few years the insufferably long lines for every attraction will be back!

I went with my good friend Shawn Chen from Stanford - thanks for a great trip man!  Below are a couple choice pictures.
Bill Nye and Ken Ham: the Missing Question
Tue, 11 Feb 2014 21:25:31 GMT
http://fieldcady.com/blog/bill-nye-and-ken-ham-the-missing-question

For those who have been living under a rock, Bill Nye the Science Guy recently debated Ken Ham, the founder of the Creation Museum.  It was a really interesting debate, and I encourage everybody to watch it if you have time.  I was very impressed by Bill Nye, and even have respect for some of the points that Ken Ham made.

There is one thing, however, I think Bill missed.  At one point he asked Ken Ham point blank: what would it take for you to change your mind?  Ham responded that he is a Christian, and Creationism is inherent to that.  Ham feels that religious evidence trumps scientific evidence across the board.  He admits it openly, and honestly I can't blame him.  That position is not intellectually dishonest.

However, Ham doesn't just assert that Creation happened.  He goes further, and argues that the *scientific* evidence supports Creationism.  So my question for Ham would be, what evidence would cause him to admit that the *scientific* evidence is against Creationism?  He could look at that evidence and stick to his guns for religious reasons.  New evidence could always turn up later that supports him.  But biblical authority aside, could he ever admit that the available scientific evidence is against him?  If Bill had asked that question, then I think it would have revealed Ken Ham for the intellectual fraud that he is.

In Praise of Cognitive Bias
Thu, 06 Feb 2014 18:33:34 GMT
http://fieldcady.com/blog/in-praise-of-cognitive-bias

I've been listening to the Freakonomics podcast recently (since my cherished Revolutions podcast is currently on hiatus).  They talk about how confirmation bias skews the psychology literature, how optimism sends us careening into manic binges, how our decisions are pretty much always flawed.  We take bad bets, and pass up good ones.  We are slaves to social pressures that we know to be irrational.  We focus myopically on metrics we can quantify, even when the numbers are bogus.

Ok, I get it.  Humans suck at rationality.  We're TERRIBLE at it.

We're so bad, in fact, that I would like to make an alternate suggestion.  Maybe rationality is the wrong thing to measure.  I mean, if you're trying to study Richard Sherman, you don't ask about his free-throw rate or his best 10K time.  You ask about his football performance, because he's a football machine.

Human beings are terrible at being Vulcans, but that's because evolution didn't design us to be.  It designed us to be something else, to solve some different problems.  And presumably we're very good at solving them, since the species has stayed alive this long.  What are those problems?  How do our minds go about solving them?  Rationality is the wrong metric to use for understanding human behavior.  And just maybe, rationality is the wrong thing to shoot for in the first place.

My favorite example of logic failing is in game theory.  The "logical" strategy is to compute a Nash equilibrium, but that's usually computationally intractable.  If you try to play logically you'll get trounced by the guy who plays based on good heuristics.  Maybe the meta-logical strategy is to recognize this fact, but you have to travel pretty far down the path of meta-logic before you stop sucking.  Better to start the whole enterprise off with good heuristics, and refine them when they fail.
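To make this concrete, here's a minimal sketch (my own toy example, not anything from the podcast).  In the simplest 2x2 zero-sum games, like matching pennies, the Nash equilibrium can be found in closed form from an indifference condition; the trouble is that this kind of calculation blows up fast as games get bigger, while the heuristic "just mix randomly" gets you the same answer here for free.

```python
import numpy as np

# Row player's payoffs in matching pennies (zero-sum):
# rows = row player's move, columns = column player's move.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

# At equilibrium the column player's mixing probability q must make
# the row player indifferent between her two rows:
#   A[0,0]*q + A[0,1]*(1-q) == A[1,0]*q + A[1,1]*(1-q)
# Solving that linear equation for q:
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
print(q)  # 0.5 -- mix 50/50, exactly what the naive heuristic says
```

For two strategies this is one line of algebra; for general games the equilibrium computation is believed to be intractable (it's PPAD-complete), which is the point above.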

So this blog post is sort of a rant, sort of a wanna-be call to action.  I'd love to see a good description of what the mind does, how it does it, and why that works.  Whether that's rational is the wrong question.
Why Python Rocks for Data Science
Sun, 24 Nov 2013 18:09:05 GMT
http://fieldcady.com/blog/why-python-rocks-for-data-science

Apologies to non-computer programmers in the audience!  I gave a webinar this last week, and just wanted to talk a little shop about it.

I pay my bills working as a "data scientist".  Basically that means I do traditional data analytics, but I do it in wild-wild-west situations that aren't standard for a statistician.  Maybe the data isn't in a form that traditional tools can read, and I need to write a custom parser in some low-level language.  Or the dataset might be too large to fit on one computer.  Stuff like that.  And my programming language of choice for this task is (almost) always Python.

My webinar goes over a lot of specific information, but it reminded me of one overarching theme.  For any task X there is a tool that is better than Python.  In some cases quite a bit better.  But also for any task X, Python is good enough to do it.  And that flexibility is the real win with Python.

The other cool thing that I talked a little bit about is how some of Python's numerical libraries work under the hood.  They're mostly all based on NumPy arrays, which are super efficient to store and fast to do arithmetic with.  These arrays are the data structures used for machine learning, numerical computation and the like.  I'd love to give a webinar called "Python Shedding its Skin" about how the language is implemented, but that's a talk for another day!
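For readers who haven't seen it, here's a tiny sketch of what that efficiency looks like in practice (a toy example of my own, not something from the webinar).  A NumPy array stores its elements in one contiguous block of memory, so a whole-array expression runs in compiled loops instead of iterating over Python objects one at a time:

```python
import numpy as np

# A million doubles, laid out contiguously in memory.
xs = np.arange(1_000_000, dtype=np.float64)

# One vectorized expression replaces an explicit Python for-loop;
# the arithmetic happens element-wise in fast compiled code.
ys = 3.0 * xs + 1.0

print(ys[:3])  # first elements: 1.0, 4.0, 7.0
```

The same line of code is also what the machine-learning and numerical libraries built on NumPy are doing internally, just with bigger expressions.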

Neanderthal Language
Sun, 22 Sep 2013 20:18:38 GMT
http://fieldcady.com/blog/neanderthal-language

There's a lot of controversy about whether or not Neanderthals had language.  In fact I almost shouldn't call it a controversy, since everybody agrees that there's like zero evidence.  But there are at least a lot of reasonable, evidence-based speculations, as described in a recent Slate article: http://www.slate.com/blogs/quora/2013/09/22/how_complex_was_neanderthal_speech.html

First some things that I expect we can all agree on.  There are a LOT of ingredients that go into human speech, and they probably didn't all evolve at the same time.  There is a continuum of traits, presumably of different cognitive complexity and from different stages of evolution.  On one end we have the most basic things, like forming sounds and the cadence of speech.  Then there is vocabulary - learned combinations of sounds that have specific meanings.  Moving on we have morphology, like how to pluralize words, and some simple rules for combining them.  "Me Tarzan, you Jane" is basically a proto-language that would have been good enough for most hunter-gatherer needs.  Only at the highest levels of complexity do we see things like subordinate clauses and recursion.  Humans are the only living creatures known to have made any progress along this continuum, but there must be a "missing link" between what we have and animal sounds.

So the question "did Neanderthals have language" is an oversimplification.  The answer isn't binary; you can have vocabulary without recursion.  The real question is where Neanderthals were on the language continuum.  Could they have written Shakespeare?  Did they top out at "me Tarzan, you Jane"?  Were there any words at all?  The "old guard" school of thought in linguistics (to which I mostly subscribe) is that they had a proto-language, but none of the more complicated structures like recursion.  Other researchers have argued that their culture and very-close-to-human biology suggest that they had the whole deal.  There is nothing on the horizon that could really settle this question, but it's an interesting one to pose.