Just another example of amusing agent rationalizations

edited February 2008 in Seattle Real Estate
So how can it always be a good time to buy? Because a house is more than a down payment, it's a home. It's more than a 30-year mortgage, it's a long-term investment. Gig Harbor is unique. It is close to Boeing, Amazon, Microsoft, and more. It is beautiful, surrounded by water and has a very mild climate relative to most places in the country. No floods, no fires, no earthquakes or tornadoes. People want to live here. It doesn't matter what Florida's or Nevada's real estate market is doing. It doesn't matter that the economic experts (who own 4 homes each), tell you not to buy even though they earned their fortune partially through real estate. What matters is you. If you're ready to buy a house, now is a great time. If you're not ready, don't buy!
That quote follows a lengthy "story" in which a young couple misses out on deal after deal because they believed the silly notion that it's better not to buy an overpriced, depreciating asset. How silly they were.

It amuses me when real estate agents go to such lengths to concoct fanciful stories like this.

Comments

  • From http://en.wikipedia.org/wiki/Nassim_Taleb:
    Taleb believes that most people ignore "black swans" because we are more comfortable seeing the world as something structured, ordinary, and comprehensible. Taleb calls this blindness the Platonic fallacy, and argues that it leads to three distortions:

    Narrative fallacy: creating a story post-hoc so that an event will seem to have a cause.
    Ludic fallacy: believing that the structured randomness found in games resembles the unstructured randomness found in life. Taleb faults random walk models and other inspirations of modern probability theory for this inadequacy.
    Statistical regress fallacy: believing that the probability of future events is predictable by examining occurrences of past events.

    More commentary: http://www.conversationagent.com/2007/0 ... ive_f.html
    We like stories, we like to summarize, and we like to simplify, that is to reduce the dimension of things. In Taleb's words, "the fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths."
  • Wait - now hold the phone here.

    I live 0.8 miles from the Redmond West Campus of MS. I can walk there in less than 15 minutes. THAT is close to Microsoft.

    Now how exactly is Gig Harbor close to Microsoft? Do they have an office over there that I don't know about? Does a four-hour daily commute sound like it's close?

    Somebody needs to call the person who wrote that ad and set them straight. There are lots of places in India that are close to Microsoft too! :)
  • Alan wrote:
    We like stories, we like to summarize, and we like to simplify, that is to reduce the dimension of things. In Taleb's words, "the fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths."

    I like the black swan theory in general, but one needs to apply it with measured caution. The total of scientific advancement for all of humanity is based on the idea that one can analyze and reduce problems to a manageable number of dimensions, and then solve said problems. This is not a bad thing! In fact, study machine learning and you find that one of the most essential facets of intelligence is to not over-fit your model of the world based on the past data you've seen.

    Let me rephrase that in a more digestible way. Assume I do not know what a dog is. If I see a poodle and am told it is a dog, I can form either a very specific or a very general model of what "dogness" is. If I form too general a model (furry, four legs), I will see a cat (four legs and furry) and also assume it's a dog. But if I classify too narrowly, I will see a terrier and have no clue what it is. (There's a toy sketch of this trade-off at the end of this comment.)

    In the real world, it's generally better to have a bad guess (it's a dog!) than no guess. So we create models of the world which are oversimplified, because we haven't seen every four-legged furry animal. If all I know is dogs, and I see a tiger, I might wrongly assume it's friendly. People do this all the time when they adopt dangerous animals.

    This is all that a black swan is. It's just something outside the observed norm, and so it's unexpected. It's good to remind people that just because we haven't seen X doesn't mean it can't happen. But it's also generally not worth obsessing over.
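
    Here's a toy sketch of that trade-off in Python; the animals, features, and rules are invented purely for illustration, not anyone's actual classifier:

    # Two hand-written "dogness" models, both trained on a single poodle.
    animals = {
        "poodle":  {"legs": 4, "furry": True, "barks": True,  "curly_coat": True},
        "terrier": {"legs": 4, "furry": True, "barks": True,  "curly_coat": False},
        "cat":     {"legs": 4, "furry": True, "barks": False, "curly_coat": False},
    }

    def too_general_is_dog(a):
        # Under-fit model: anything furry with four legs counts as a dog.
        return a["legs"] == 4 and a["furry"]

    def too_specific_is_dog(a):
        # Over-fit model: only things exactly like the one poodle we saw count.
        return a["legs"] == 4 and a["furry"] and a["barks"] and a["curly_coat"]

    for name, features in animals.items():
        print(name, "| general says dog:", too_general_is_dog(features),
              "| specific says dog:", too_specific_is_dog(features))
    # The general model wrongly calls the cat a dog; the specific one misses the terrier.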
  • Come now, I'm sure you can drive from Gig Harbor to Microsoft in 15 minutes... (does anyone at MS commute from Gig Harbor, I wonder).

    I also like the bit about no floods or earthquakes...
    I studied AI for five years in grad school. I understand the machine learning viewpoint. One of the most easily made mistakes that results in overfitting is to use data that is not independent. If you read the newspaper and it says that housing always goes up, and then you talk to your neighbor and he says that housing always goes up, you might think those are two different pieces of information and increase your confidence that housing always goes up. But your neighbor got his information from the newspaper, so you are actually giving double credit to the newspaper. What is worse is that the next time you talk to your neighbor, you tell him that you agree housing prices always go up. That increases his confidence -- but now he is counting the paper three times (twice from his own reading and once from yours). There's a small numeric sketch of this double counting at the end of this comment.

    AI (and natural intelligence for that matter) is largely about choosing the right biases that are useful for your domain. Taleb claims that the bias humans naturally have for narratives is strong and that you need to be careful that it is not exploited by others. It doesn't mean that narratives aren't useful, just that they aren't as useful as you think they are.

    Of course, I used a narrative to make my point. If you believe that narrative, you might make the inductive step that Seattle Bubble is an echo chamber where we all continually reinforce our own beliefs, and maybe there isn't a bubble in real estate after all. But since you have a natural bias to give too much weight to narratives, you should probably pay less attention to my story than you initially did.
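
    Here is that double counting as a tiny numeric sketch; the prior and the likelihood ratio are made up, the point is only the shape of the mistake:

    # Naive Bayesian updating that pretends correlated reports are independent.
    def posterior(prior_odds, likelihood_ratio, n_reports):
        odds = prior_odds * likelihood_ratio ** n_reports
        return odds / (1 + odds)

    prior_odds = 1.0   # start at 50/50 on "housing always goes up"
    lr = 2.0           # invented strength of the newspaper's evidence

    print("counting the paper once:      ", round(posterior(prior_odds, lr, 1), 2))  # ~0.67
    print("paper + neighbor + your echo: ", round(posterior(prior_odds, lr, 3), 2))  # ~0.89
    # All three "confirmations" trace back to the same article, so the honest
    # posterior is still ~0.67; the extra confidence is the echo chamber talking.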
  • My iPhone informs me that it is currently a 1 hour and 24 minute drive from MS to Gig Harbor with traffic (52 miles, BTW). Of course, it's still early in the evening commute...
  • Good response Alan. And I agree that narrative is a powerful function in human decision making.

    Reading that gun violence is up 500% might make you shake your head. But having a friend tell you how they were shot at will probably cause you to change your behavior. So I agree 100% as far as that's concerned.

    I haven't read the Black Swan book; my worry was that people would read a synopsis of it, hear that big events are often not predicted, and then come to the conclusion that we need to always be on the lookout for these black swans. In reality, there's a difference between rational caution and paranoia, and a lot of people I know tend to prefer paranoia.
  • I am in the process of reading it. It is a fun and easy read. Taleb uses narratives to make his points and then tells you that it is easy to make mistakes from narratives.

    I like to use "the water tank" problem as an example of what he is talking about.

    Consider a water tank with a constant flow of water entering it and a drain where the flow of water out is proportional to the level of the tank. You have a bobber that measures the water level, but the splashing and waves caused by the entering water give you a noisy signal. We model the noise with a Gaussian curve.

    If you do a qualitative analysis of this system, you find that one of three things can happen:
    - The water reaches a constant level below the top of the tank as the flow in becomes equal to the flow out.
    - The water overflows the tank.
    - The water reaches a constant level right at the top of the tank (which is kind of a corner case so I'm going to ignore it for the rest of this discussion).

    One can measure the water level of the tank and make predictions about the next time step. If the tank does not have a "funny" shape, you can even predict fairly early what the steady state of the tank will be. However, if you do not know the capacity of the tank, you have no way of predicting, through observation of the water level, when the tank will overflow. You can fit a highly predictive exponentially decaying curve with Gaussian noise, and you get these fantastic results right up until the point when the tank overflows. Even if you know that the tank might overflow in theory, reading the water level tells you nothing about the actual capacity. And of course, by the time it happens, it is too late. The tank overflowing is a Black Swan. (A rough numerical sketch of this setup is at the end of this comment.)

    But maybe you know the capacity. There are still other things you may not know. Maybe the tank isn't strong enough to hold all the water that could fit in it. Maybe its supports aren't strong enough. Maybe it is near a busy intersection and a car crashes into it one day. The guy making predictions from the model says it is impossible. According to the error estimates it happens once in a million years. And yet it happened twice last month.

    I think black swans boil down to bad models, but building good models is nearly impossible. How do you account for things that you don't know that you don't know?
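
    For anyone who wants to play with the tank, here is a rough numerical sketch of that setup; all the constants (inflow, drain coefficient, capacity, noise level) are invented:

    # Constant inflow, outflow proportional to level, noisy level sensor.
    import numpy as np

    rng = np.random.default_rng(0)
    q_in, k = 2.0, 0.1       # inflow rate and drain coefficient (made up)
    capacity = 15.0          # true capacity -- the observer never sees this number
    dt, steps = 0.1, 600

    level, levels = 0.0, []
    for _ in range(steps):
        level += (q_in - k * level) * dt   # simple Euler step
        levels.append(level)
    levels = np.array(levels)

    readings = levels + rng.normal(0, 0.2, size=levels.shape)  # what the bobber reports
    print("noisy readings the observer sees:", readings[:5].round(2))
    print("steady state the level curve heads toward:", q_in / k)

    overflow_step = int(np.argmax(levels > capacity))
    print("tank overflows at step", overflow_step,
          "at level", round(float(levels[overflow_step]), 1))
    # Nothing in the noisy readings before that step reveals the number 15.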
  • Alan wrote:
    I think black swans boil down to bad models, but building good models is nearly impossible. How do you account for things that you don't know that you don't know?

    That was a very interesting example. A tank overflowing because you don't know its shape or capacity is a bad (or at least incomplete) model. It will always overflow, but the model does not predict that; hence the model is bad.

    The example of a car crash, however, occurs in a model space so huge as to be intractable. A model is (by definition?) developed by removing aspects which are complicating and unlikely to significantly affect results. To include everything in your model makes it a super-high-fidelity simulation; the highest-fidelity simulation would be to actually get the tank, put it by a road, and start adding water.

    In the example you explained, the flaw is not in the model, but in the interpretation that the model does not predict an overflow and the conclusion that overflow is impossible.

    If that's the lesson, I find it sad that we even need to teach people this. As a fellow computer science guy, I'd say the idea that models are simplifications of real life is so fundamental as to be axiomatic.
  • I think one of the lessons is that you don't want to leverage billions of dollars on a model that you know is incomplete in some way that you do not understand.
  • "If that's the lesson, I find it sad that we even need to teach people this. As a fellow computer science guy, that models are simplifications of real life is so fundamental as to be axiomatic."

    A standard problem with economic models is that they tend to use nice, smooth, tractable, twice-differentiable functions - so they aren't very good at predicting when something is going to fall off a cliff.
  • kpom wrote:
    A standard problem with economic models is that they tend to use nice, smooth, tractable, twice-differentiable functions - so they aren't very good at predicting when something is going to fall off a cliff.

    I guess I am just curious what the better alternative is. Is it better to use a model which smooths out some things, or is it better to throw up our hands and say, "We can't figure this out, so let's not make any decisions"? I'm with everyone that you can't just do what a model says if it appears to be irrational. There's something to be said for using the best model we have, but watching for cliffs, and changing the model if we notice we're heading for one.
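
    Here's a toy sketch of the smooth-fit problem kpom describes; the price series and the size of the drop are invented:

    # Fit a nice, smooth, twice-differentiable curve to pre-2008 "prices",
    # then extrapolate past a regime change the data never hinted at.
    import numpy as np

    years = np.arange(2000, 2008)
    prices = 100 * 1.08 ** (years - 2000)       # made-up 8%/yr appreciation

    x = years - 2000.0
    coeffs = np.polyfit(x, prices, deg=2)        # smooth quadratic fit
    future = np.arange(2008, 2012) - 2000.0
    print("extrapolated 2008-2011:", np.polyval(coeffs, future).round(1))  # keeps climbing

    actual_2008 = prices[-1] * 0.8               # suppose prices fall 20% in 2008
    print("hypothetical actual 2008:", round(float(actual_2008), 1))
    # Nothing trained on the smooth 2000-2007 data -- this fit or any other
    # smooth one -- anticipates the cliff.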
  • The point of my example, and of Taleb's book, is that you cannot see the cliff by using past data. By the time you see the cliff you are already falling.
  • Alan wrote:
    The point of my example, and of Taleb's book, is that you cannot see the cliff by using past data. By the time you see the cliff you are already falling.

    OK, got it. But what should I use to see the cliff instead? My brain might make 'better' models than a computer, but it's still a model. A hypothesis like "real estate never loses value" somehow falls out of this model. So again, if I cannot build a model that recognizes the cliff, and if my brain is incapable of modeling the cliff, how do I decide what to do? I'm struggling to grasp the practical value of this message, but I do want to be enlightened.
  • kpom wrote:
    Come now, I'm sure you can drive from Gig Harbor to Microsoft in 15 minutes... (does anyone at MS commute from Gig Harbor, I wonder).

    I also like the bit about no floods or earthquakes...

    I'm sure there's one or two crazy people who make that commute. They probably work from noon until 10 pm though.

    The no floods/earthquakes part was great. They forgot to mention that we don't have active volcanoes, either. :lol:
  • Alan wrote:
    How do you account for things that you don't know that you don't know?

    Ask Donald Rumsfeld? :lol:


    "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know."
  • I've read Taleb's first book; I'm waiting for the Black Swan soft cover to come out...

    The fact that we have to resort to statistics just means that there is a lot of stuff we don't know about a system, so we use a large data pool to give us an average idea of what the system is doing. Consider quantum mechanics, where you describe electrons as probability curves, or thermodynamics, where you don't track each individual molecule but resort to statistical mechanics to tell you the averages, which is what pressure and temperature are. Because these are physical systems, they obey certain invariant properties 99.9999% of the time, so using such measures in engineering is acceptable.

    Even in science and engineering, problems often must be simplified in order to have a solution (analytically, I mean; e.g., we can only solve two-body problems analytically, anything higher we must attack with numerical techniques), otherwise the problem would be too complex to solve. By simplifying you are eliminating certain information, but in engineering we say the approximation is good enough. Most problems that are solvable are linear, meaning first order; anything of higher order is difficult to impossible to solve (consider any high-order PDE), so the higher-order effects, if minimal, are lopped off so that we can make progress in the world. (There's a quick sketch of what lopping off higher-order terms looks like at the end of this comment.) Again, these simplifications work because physical systems are invariant; they obey universal laws. When you use these techniques to model financial and economic topics, which are "man-made" and include people's emotions, they will not do a good job and are thus subject to blow-ups...

    In terms of the water tank example, you would model the structural portion of the tank separately from the flow portion, and if time and money permit you could subject it to vibration, etc. There is not one model that encompasses both the flow and the structural parts; you'd maybe calculate one and use that as input to the other one...
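
    Here's a quick sketch of what "lopping off the higher-order terms" looks like in the simplest possible case, the small-angle approximation sin(x) ~ x:

    import math

    # Linearization error of sin(x) ~ x at a few angles.
    for deg in (1, 5, 15, 45, 90):
        x = math.radians(deg)
        exact, linear = math.sin(x), x
        print(f"{deg:3d} deg  sin={exact:.4f}  linear={linear:.4f}  "
              f"error={(linear - exact) / exact:.1%}")
    # Up to ~15 degrees the error is about 1% or less; at 90 degrees it is ~57%.
    # A physical system lets you choose where on this curve you operate;
    # a market full of people does not.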
  • The no floods/earthquakes part was great. They forgot to mention that we don't have active volcanoes, either. :lol:
    Including one erupting volcano.