- An events website (Meetup, Facebook, EventFinda, and every other event content aggregator should steal this) that, once users sign up, looks at their device's location data (if the user grants the application permission, and location services are active) and calculates when to send the person a reminder based on how long it will take them to get to the venue from where they are using public transport (via a Google or Waze API for the journey-planner feature).
This software just brings existing data together in a way that's helpful to users with more interesting things to think about than when to do stuff.
- Snack Roulette: an app that orders food from a randomised local vendor. Delivery, pick up, and eat out options are available. The user simply authenticates their credit card details, sets their maximum budget for the meal and number of eaters, is informed how many options there are, hits confirm, and then receives the result which is automatically billed. The delivery option comes with the surprise of seeing who comes to the door and with what.
- Weight guesser: an app that estimates a person's bodyweight when you point your phone's camera at your reflection. Users train (or try to screw with) the app's machine learning by entering the correct value, if known.
- Memory fibres in a necktie that remembers its folds once you train it.
- A collapsible drinking cup.
- Moisture wicking shoelaces, to dry your shoes while you wear them.
- A lexicon of new words to accurately describe flavour and flavour combinations.
- Modular language components for easy general production of new words that are self-describing.
- A shirt with an invisible wax coating on some of the fibres so that when you sweat, the darkening of the unwaxed fibres shows an image.
- Psychometric testing data analysis to correlate project success rates and ROI with certain personality traits, to quantitatively measure company values.
- An app that uses the camera and optical character recognition to parse and save the information on a business card into your address book so you only have to photograph the card, not type anything.
- An attachment for table legs that doubles as a decorative floor protector and a precision height adjuster. Your antique wooden table never need wobble, no matter how warped the wood gets over time.
- An app that displays a cell network coverage heat map: you can see how many bars you'll have at any place before you go there.
- An API for the cellphone coverage heatmap app above which serves data to your route-finder app (like Waze) and warns you before your journey that there will be lost or diminished coverage where you're going.
- Measure the minimum distance at which people can sit beside each other facing the same way without being able to legibly see each other's mobile device screens, and mark that distance as "privacy distance" at train stations and other public places so it's easy for people to be systematically courteous to each other.
- Two positional sensors attached (or inbuilt) to a monitor that enable monitors to triangulate their positions in relation to each other and automatically configure seamless mouse cursor movement between screens that aligns with the monitors' actual positions in space.
- An app that lets you and another user have on your screen a frame showing what's on the other person's screen (minus the frame!).
Ideal for showing loved ones (or researchers) your screen and content habits.
- A background app that lets you record all activity on your phone into a compressed video file that you can refer to later or access remotely, to provide an evidence-based way to know if your phone has been messed with.
- A background app that lets you record from both cameras into a compressed video file. An ideal way to document the life of your double-chin.
- Public dashcam app that lets you put a live feed of what your car's dashboard mounted camera sees onto the Internet. With an app that lets people view it.
- An app like Flightradar24 but which shows all planes, trains, buses, ferries, and other publicly transmitting vehicles, anywhere in real time.
An additional feature that lets you tap any vehicle and see a dashcam/cockpit live video feed, if available.
- A company with total transparency, automatically Tweeting or publishing business insights as they’re derived from data.
- A machine-learning behaviour analysis app that learns via the accelerometer, tilt, and cameras the minutiae of the user’s phone position when in use or in your pocket or bag, and automates features like locking, launching apps, and turning apps on and off according to your usual behaviour.
- GPS for buses - takes users’ positional data and use of journey-planning software and gives bus drivers a dynamic mini-map of nearby bus users who may be likely to board the bus. This feature reduces missed bus trips for users by enabling more informed driver discretion on whether or not they can and should wait for close-running passengers to board before leaving each stop.
- A cochlear insert that voices positive thoughts to the user. Can be set to off, to a fixed frequency, or to a reactive contextual mode using software that determines – via Bluetooth to the mobile device – location, time of day, and inferred activity.
- Shirt and trouser pockets tailored to the exact measurements of your device and wallet and keys.
- A mass-producible, wearable electrocardiogram hardware plugin for smartphones.
- Mechanically compactable public trash cans that let a helpful populace fit more trash in.
- Data-driven urban planning in industrial parks to plan food vendor availability and revenue by evaluating the number of personnel per area, food surplus margins, and anonymised individual user habits to feed everyone, minimise waste, and maximise consumer satisfaction.
- GPS that factors in the weather by correlating past weather conditions with past traffic patterns, and outputs driving advice per driver factoring in advice it gives to all drivers, to mastermind the traffic network and avoid replacing organic traffic congestion with app-coordinated traffic congestion.
- GPS that lets you mark and record intersections, roads, and suburbs that you wish to avoid so you never get routed through a place you don’t want to go.
Here are 30 ideas, for free, fun, or profit. Basically for you to use however you want.
ADJECTIVE; equating elevation with worth
‘an acrocentric description of the surgeon's skill is that she is at the top of her profession’
This article is about our linguistic trait, in English, of arbitrarily viewing the concept of elevation to be inherently synonymous with positivity (including the opposite, where descent is made synonymous with negativity).
We speak as though 'Up' equals 'Good'...
“To be thought of highly.” “High quality.” “Elevated status.” “Higher public profile.” “High prices.” “Highly recommended.” “Above and beyond the line of duty.” “Having one over on the other guy.” “Top scientists.” “Highest honour.” “To be at the pinnacle of one’s career.” “To heighten the risk.” “Heightened senses.” “Upwardly mobile.” “High salary.” “The top of her field.” “The top of his game.” “Highest ranked.” “Upper management.” “Top of the line.” “The upper end of the quality spectrum.” “High socioeconomic status.” “Get over it.” “To climb the ladder of success.” “Up and coming.” “The highest honour I can bestow.” “To take the high ground on a matter.” “High marks.” “A higher life.” “To have peaked in life.” “To be held in uppermost esteem.” “Lofty ambition.” “Uplifting news.” “A high-end establishment.” “To lift someone up.” “Head of department.” “High on life.” “Top professionals.” “The upper class.” “Reaching the upper limit.” Even “Top of the food chain” and “Apex predator” use height to describe the success of an entire species!
The concept is implicit in popular inspirational quotes: "Your attitude determines your altitude."
There are countless depictions of Heaven or Paradise as a plane – physical, spiritual, or both – located above the plane of the Earth’s surface and lower atmosphere: “The Lord above.” “The man upstairs.” “Heavens above.”
We also see very recent idioms adopting the old concept of acrocentricity into new concepts. Like the idea of “trolling at a high level.”
Despite their different subjects, all these examples share the attribute of using language that equates elevation with worth. (There are loads more, and I’d love to see examples you’ve used or encountered. Tag @autonomike on Twitter and use #acrocentricity and I'll add them to this article.)
...and as though 'Down' equals 'Bad'
We say things like this too:
“Bottom feeder.” “That’s low.” “Downtrodden.” “You have to learn to crawl before you walk.” “To put someone down.” “To fall from grace.” “To be taken down a peg.” “To be laid low.” “Scraping the bottom of the barrel.” To be “tread on” or “stepped over”. And of course depictions abound of Hell as subterranean, “down there”, below our feet, ever presently waiting for us to fail and fall. “It’s all downhill from here” indicates a “descent” into an inferior circumstance, despite the biophysical reality that moving downhill consumes less energy than moving uphill.
There are many situations where elevation is not good – or downright dangerous. The fear of heights is one common example. The actual danger of heights is another. Yet humans can also have a rush of adrenaline and dopamine in response to such danger – a physical acrocentric response!
As much as the English language adheres to acrocentric concepts, we also say things that seem to contradict the established equation of height with worth. We say "he is scum" – scum being the uppermost layer on a body of liquid. Or “she has her head in the clouds”, which is sometimes used to criticise a person for ambitious ideals. In contrast, "down to Earth" is considered a virtue: to be grounded, realistic, and dependable. To be “deep” is to be profound, but in any physical sense the only things characterised by their depth are deep in a downward or laterally downward direction: an ocean, a cave, a mine shaft, a crevice, a tunnel, a chest freezer.
The non-universality of acrocentricity and upness
In our cosmos of space, there’s no universal “up”. There is a global “up” on a celestial body of any size, which simply means “outward, away from the centre,” but even that is self-contradictory: a person standing in Stockholm and one in Auckland have completely opposite definitions of what direction "up" actually is, since each points out from a different side of the planet. The global definition of "up" is really just a local definition that indicates a direction outward from a sphere, such as sea or ground level. But in a universal sense that considers all of space, “up” is no more than a subjective perceptual concept developed by terrestrial beings with their eyes on the stars. Outward from the centre of the universe isn’t “up”; that would be “outward from the centre of the universe.” The conclusion here is that “up” only exists in the context of orb-dwelling life such as ours.
How can we explain our innate trait to equate 'Up' with 'Good'?
One way to justify the logic of the acrocentricity phenomenon is by considering a bar graph, such as one used to visualise stock price, where our y axis is monetary value, and x is time. If a stock is rising, it is becoming more valuable – if we own that stock then its elevation on the graph is subjectively positive to us, since it is a profitable situation for us as the owner of the stock. Easy enough. But if the stock is not owned, then elevation is negative because it denotes a profit opportunity on which one is missing out.
Regardless of whose viewpoint is used, an increasing stock price raises the point of value on the graph to correspond with a greater number in terms of its financial value. That word choice may be relevant to explaining acrocentricity: "greatness" is inherently associated with both size and positivity in the English-speaking parts of the world. The association of these two definitions for the word "great", although illogical in its lack of absoluteness (consider: you probably wouldn't describe a tumour as "becoming greater" as it grew bigger), nevertheless provides a simple etymological possibility for explaining our now deeply entrenched sense of acrocentricity:
"Good means great" > "Great" means "big" > "big" means "tall" > "tall" means "high" > so "high" therefore means "good".
In numerical terms it’s a bit clearer. A graph, be it of share prices or percentages or other values, depicts height (or 'upness') as distance along the y axis from zero, with zero marking the intersection with the x axis and the largest value marking the y axis’s far end. To our human eyes, the y axis goes up and down. So we describe it that way: 6 is a “higher” number than 5 on that vertical axis. In fact 6 is a “greater” number than 5, which means bigger, which means taller, which means higher. Isn’t it interesting how we don’t call 6 a taller number than 5, but we would call it bigger, higher, or greater?
The arbitrariness of up
Setting aside humans' arbitrary use of the concept of elevation to denote worth, height is an arbitrary concept in itself. It is purely contextual, and relies on a very specific shape of a very large size to mean anything at all: specifically, a spheroid of such a size that it can be considered a celestial body. "Up" only makes sense, as a direction, when it means "outward from the centre of an environmental spheroid." "Height" only makes sense as a position in the context of the same: "further outward than the subject from the centre of an environmental spheroid." So on a planet, the measured dimension of a vertically standing pole is called its “height”. The same dimensional measure of the same object floating in space would be called its “length”.
'Up' as we know it only exists on spherical bodies; not cubes, cones, or hexagonal prisms. Since there's no height without an up against which to measure it, height too is a property of spherical bodies.
Since "up" means "out," and "higher" means "further out," what does this logical definition do to acrocentricity? Does “out” mean good? Does distance from a centre denote worth? Are nuclei bad, and the worth of orbits measured by their distance from them? Not generally. Outness, despite sharing its exact meaning with upness, doesn’t stretch to include worth. (Unless maybe you consider the phrase “that’s far out, man,” to be high praise.)
We say these things every day. But why do we say them? Why do we instinctively hear "good" when someone says "high"?
The concept of good being up and bad being down reflects that advancing is a struggle and a risk compared to regressing. Regression is often as easy as doing the wrong thing, or doing nothing when something needs to be done. This relates to the perpetual embrace of gravity, whereby for many of us our entire lives teach us that moving upward is a matter of concerted and coordinated physical effort which must be conducted just right, whereas moving downward can be as simple as letting go – and perhaps even fatal, if we were to fall too far.
PhD candidate and ecologist Joshua Thoresen considered the matter of acrocentricity during development of this article, and pointed me to Robert Macfarlane’s Mountains of the Mind.
The anthropomorphic view
Homo sapiens as a species is highly vision-reliant. Vision, for those of us fortunate to have it, is our primary sense, our primary means of obtaining information about the world: more so even than television, Google, any book, any map, any diagram. Indeed, vision is what enables information to be obtained from these and many other sources, including our environment.
Never mind internet searches – what about before all this visual technology and all these screens? What about back when Homo sapiens was a canny forebrained ape making its first ventures into the realm of technological advancement with a flat rock and pointed stick? Vision was all-important then. Our eyes, at the front of our ape skulls just like our cousin species', are well adapted for hunting. Our prehistoric communities stood safe thanks to the watchful eyes of sentries who could spot any major threat and raise the alarm. Those sentries would have found that an elevated position – atop a boulder, a hill, up a tree – afforded a superior view and the ability to see further than the mere ground allowed.
Specifically, spatial elevation affords humans increased visual information.
The further we are above the plane supporting us, the further around that plane we can see. Elevation brings more of the world into our line of sight, so light from more and more distant objects can reach the photoreceptor cells of our retinas. The more height we gain, the more of the scene the eye takes in, and the more information there is for the brain to make sense and use of. Information is inherently valuable to humans. (I don’t need to tell you that! Look at your greedy eye-brain right now guzzling up all this delicious information.) Through our vision, a little height translates to greater volumes of visual information. Go high enough and you’ll see fully half of the planet!
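The relationship between height and how far you can see is concrete geometry: on a sphere, line-of-sight distance to the horizon grows with the square root of the observer's height. A quick sketch of the standard formula (the example heights are just illustrative):

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def horizon_distance_m(height_m: float) -> float:
    """Approximate line-of-sight distance to the horizon for an observer
    height_m metres above a smooth spherical Earth (ignoring refraction)."""
    # Pythagoras on the tangent line gives d = sqrt(2*R*h + h^2);
    # for h much smaller than R this is close to sqrt(2*R*h).
    return math.sqrt(2 * EARTH_RADIUS_M * height_m + height_m ** 2)

# Eye level (~1.7 m) vs. a treetop sentry (~10 m) vs. a hilltop (~100 m)
for h in (1.7, 10, 100):
    print(f"{h:>6} m up -> horizon ~{horizon_distance_m(h) / 1000:.1f} km away")
```

A sentry ten metres up a tree sees roughly two and a half times further than one standing on the ground, which is the 'elevation = information' principle in numbers.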
In practical terms for the development of a species, the principle of ‘elevation = information’ has proven its value over millennia. Be it a forager surveying for a prime picking spot, a hunter searching for prey, a hiker trying to gain their bearings, a lookout in a crow’s nest scanning the horizon, a fort outpost overlooking hostile territory, a firewatch tower placed to monitor bush fire activity, no matter the application the equation remains: ‘Elevation = information’.
If you need to see stuff, higher is most definitely better.
The principle holds equally true today. You can test it right now by looking up from the device near your face to survey your environs. Behold, a slightly broader view of your environment; a larger span of visually-acquired information gained through the simple act of raising your eyes or your head or both at once.
Could this universal principle be the basis for acrocentricity? I’m not claiming it is, because that claim would at least require a venture into other languages, which is the topic of a future article. But it does seem likely.
We should note here that in English, acrocentricity is independent of the visual aspect this explanation relies on. We don’t say “at the highest visual vantage point of one’s career.” We say “at the height of one’s career.” Perhaps vision is implicit. Or perhaps the principle of ‘elevation = information’ is separate from the linguistic phenomenon of acrocentricity. Make up your own mind.
Is acrocentricity healthy? Is it safe?
As a communication form, acrocentricity is an efficient language tool. Think how much easier it is to rely on “high” to convey a sense of worthiness, instead of actually describing how worthy you think the subject is and opening yourself up to semantic discussions where you’re asked to substantiate that evaluation of worth.
Arbitrariness and resultant objective silliness aside, acrocentricity is a part of our language that there seems little point in wilfully trying to change. In my observation, we haven’t taken it to a harmful extreme: we don’t assume taller people have more worthy brains than shorter people, even though the brains and eyes of tall people are indeed “higher” than everyone else’s. Most English speakers instinctively know the exceptions to acrocentricity, and when to apply them to what we say and to our interpretation of what we hear.
Acrocentricity is here to stay. I haven’t finished plumbing ancient texts for instances of it, but at this stage the hypothesis is that the phenomenon is very old.
You might enjoy making a game of spotting acrocentricity when you come across it, or even pointing it out when you hear others discuss their "high" salary or their "elevated" social status. You might even challenge them to say more explicitly what they mean, using language that doesn’t connotatively try to convey a sense of worth when all the speaker is really saying is how they personally feel about the subject. Insisting that one’s organisation is of higher repute than its competitors just means that the person thinks it has a better reputation than its competitors: acrocentric boasting offers no objective metric whatsoever. If you do point it out or challenge it, you might see people struggle to use denotative language, and resent you for putting them to cognitive effort they had tried to dodge by using such easy language. It is, after all, a phenomenon we English-speakers learn from the very outset.
Perhaps that’s the value of acrocentricity: an easy linguistic aid to sell our views on matters we don’t know how to describe in denotative terms.
It can be useful and interesting to be aware of, but there's really no need to get high and mighty about it.
Thanks for learning!
How do we seek to know the objective truth about a subject? Why, we seek evidence! We crave sources of information we can trust and therefore accept as the basis of the formation of new knowledge, opinions, beliefs, and convictions.
We want that evidence to be empirical! So we form protocols and controls and experiments to produce empirical evidence. Or we have others do it, and we learn from their work because we trust that they did it well – and we can check that to our satisfaction via their published literature, our peer review processes, the credibility of the journals that accept them, and the others that replicate their results.
Sometimes the empiricality of evidence gets questioned, by ourselves or by others.
"The researchers could have done this extra thing,” or “They should have controlled for this variable of interest to me."
The criticism centres on a lack of specificity in the evidence manufacturing process. Fair enough! Shouldn’t we be completely specific when manufacturing new evidence?
Specificity, after all, is the very basis of empiricality.
But the fact is (or is it a fact? It's observable and testable, but is it pure objective truth?) that no matter how exhaustively specific you are, you can always be more specific.
Test my claim if you like: Find or write any definition that you feel is an example of ultimate specificity, show it to me, and then concede the point when you see me make it more specific.
Even when you produce a definition that is as specific as you know how to be, that doesn't mean it is the most specific it's possible to be. Even producing a definition more specific than any other definition ever to exist doesn't mean it is the most specific it's possible to be – merely the most specific definition that has existed so far.
Yet you have to use something for your evidence production. You can’t plumb the infinity of specificity for the rest of time to reach a theoretical endpoint before you begin your process of producing evidence. You simply reach a point of specificity that is adequate for you (or your stakeholders) and you move the work ahead, produce the evidence, and share the knowledge.
That’s an arbitrary point to reach. And it's determined purely by how satisfied we feel about the level of specificity we decide to use.
Because of our inability to achieve ultimate specificity, evidence therefore can never be truly, objectively, empirical. It can only ever be empirical enough for an individual to accept and choose to consume.
The point is there's no bedrock of specificity; and therefore there is no ultimate form of empiricality.
Evidence is not either "empirical" or "nonempirical". It's only, ever, "less empirical" or "more empirical" in relation to other evidence.
All evidence we produce and consume and base our opinions and beliefs and convictions on is done by drawing our own arbitrary line in the infinite shifting sands of empiricality.
We each make the choice to accept evidence and to form opinions, beliefs, and convictions based on our own completely arbitrary threshold of acceptable empiricality.
Sometimes we sneer at others for having a threshold lower than ours and polluting their beliefs with low-quality information, or scoff at those with higher thresholds for damaging their living experience with their closed-mindedness.
That's embarrassing, because all our thresholds are equally arbitrary (and they change all the time, depending on how we feel and how badly we want to accept a piece of evidence).
Next time you learn something, take a look at how empirical your source was. If it was less empirical than other sources you learned from, ask yourself why.
What you learn about yourself in the process may not be empirical, but it might be useful to you.
And what better purpose could knowledge ever serve than that?
Thanks for learning!
1. A bill is written and a case is made defining the expected benefits to society of the proposed law change, including specific metrics for measuring efficacy. Current data defining the problem the bill intends to fix is also provided. This is put before the law approvers to implement or reject.
2. The bill is passed and becomes law for a provisional period determined by the approvers.
3. The law change is communicated to the populace.
4. Data is accumulated using the stated metrics for the provisional period.
5. At the end of the provisional period the law is evaluated by the approvers. Did it measurably improve society in the manner expected, or did it make it worse?
6. The law is moved from provisional to permanent status, or is removed.
7. The result and the data on which it is based are communicated to the populace.
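The seven steps above amount to a simple lifecycle for each bill: proposed, provisional, then permanent or repealed depending on measured results. A minimal sketch of that state machine (the bill name, metric, and the strictly-improved pass rule are my own illustration, not part of the proposal):

```python
from dataclasses import dataclass, field

@dataclass
class Bill:
    name: str
    metrics: dict                      # metric name -> baseline value (step 1)
    status: str = "proposed"
    results: dict = field(default_factory=dict)

def approve(bill: Bill) -> None:
    bill.status = "provisional"        # steps 2-3: passed and communicated

def record_results(bill: Bill, measured: dict) -> None:
    bill.results = measured            # step 4: data accumulated over the period

def evaluate(bill: Bill) -> str:
    # Steps 5-6: keep the law only if every stated metric improved on its
    # baseline; a missing measurement counts as no improvement.
    improved = all(bill.results.get(m, base) > base
                   for m, base in bill.metrics.items())
    bill.status = "permanent" if improved else "repealed"
    return bill.status                 # step 7: the result is then communicated

law = Bill("Example Act", metrics={"recycling_rate": 0.40})
approve(law)
record_results(law, {"recycling_rate": 0.55})
print(evaluate(law))  # -> permanent
```

The design choice worth noting is that the pass/fail rule is declared up front in `metrics`, which mirrors step 1's requirement that efficacy measures be defined before the law is ever enacted.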
This post is to thank Concept Frontier readers for coming to my website and learning what I have to offer. Here are 30 new ideas for you to do whatever you want with.
A thought-prompt generator app and website. Hit a button, get something to think about. If it's not interesting, hit it again.
Thanks for learning!
Humans have a lot of defects. The Blame Reflex, and our innate "Us vs Them” pathology are two big ones.
Our defects lead us to behaviours that aren't in civilisation's best interest, or even our own best interest. No single human's behaviour has been, or ever will be perfect. Every one of us behaves badly sometimes, as you might have noticed.
Yet civilisation is improving. How is this possible?
Here's one explanation.
If you applied these two attributes – 'Good for the individual human', and 'Good for civilisation' – to all human behaviours in a big list, you'd wind up with four types of behaviour:
- (G): good for the individual and good for civilisation – good for everybody.
- (B): bad for the individual and bad for civilisation – bad for everybody.
- (GI): good for the individual, but bad for civilisation.
- (GC): good for civilisation, but bad for the individual.
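The two-attribute split can be written as a tiny classifier (a toy model of the categories used in this post, not anything from a real system):

```python
def classify(good_for_individual: bool, good_for_civilisation: bool) -> str:
    """Map a behaviour's two attributes to the post's four labels."""
    if good_for_individual and good_for_civilisation:
        return "G"   # good for everybody
    if not good_for_individual and not good_for_civilisation:
        return "B"   # bad for everybody
    if good_for_individual:
        return "GI"  # good for the individual, bad for civilisation
    return "GC"      # good for civilisation, bad for the individual

print(classify(True, True))    # -> G   (e.g. creating and sharing wealth)
print(classify(True, False))   # -> GI  (e.g. theft)
print(classify(False, True))   # -> GC  (e.g. the grind of politics)
print(classify(False, False))  # -> B   (e.g. murder)
```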
1. We agree on basic logic
Bad for everybody (B) and Good for everybody (G) are easy. We all agree behaviours that are good for everyone should be maximised, and behaviours that are bad for everyone should be eliminated. That’s a consensus, and a fundamental factor in civilisation’s improvement. But it’s not the only factor.
2. Human productivity increases over time
Despite our innate defects, humans also have innate gifts. One of them is the compulsion to create. We create things, and we share them, and that's all wealth is: productivity. Human wealth production increases over time. Now, we're not as good at sharing that wealth as we are at creating it (take a look at income inequality) but we create wealth at an increasing rate nevertheless. Creating this wealth of new resources is a (G) behaviour (even though it has side effects we need to address, like the effect our prolific productivity has on our climate).
Our productivity creates not just goods and services, but tools that offer us opportunities to do entirely new productive behaviours. Consider how readily available computer coding is now, compared to its non-existence in 1916. Our productivity actually creates new (G) behaviours, and it happens all the time.
The effect of this trend to create more (G) behaviours is that there is an increasing ratio of behaviours that are good for everybody, and a much slower increase in the number of behaviours that are bad for everybody (B) – because nobody is busy thinking up new ways to do things that harm us all. (Although sometimes we get new (B)s as side effects of our productivity, like in that climate change point above).
This trend is huge. Our increasing ratio of (G) to (B) behaviours is another fundamental way in which civilisation improves inexorably. But that's not all!
3. The battle of “Me vs We” has only one winner
The complexity comes in when we look at the other two types of behaviour. (GI) and (GC) have an interesting convoluted interplay of their own.
(GI) behaviours are done by an individual. (GC) behaviours, however, are done by large numbers of individuals. As a result (GC)s are an overwhelmingly more powerful force in human development than (GI) behaviours. Wherever they are at odds, (GC) beats (GI) every single time.
When you do a behaviour that's good for you, but bad for civilisation (GI) – like steal a packet of chips from a shop – you gain something. This is basic resource exploitation: the environment has something you want, you compete with resource competitors (the shop owner, staff, legitimate customers) and try to get what you want from the environment without getting your teeth kicked in, or arrested, or overpowered and the precious chip resource taken off you by someone else, or being forced to pay, or otherwise not getting away with it. It's a high risk for a small reward. All (GI) behaviours carry risk.
That's an example of (GC) behaviour (law-making, law-enforcing) defeating a (GI) behaviour (theft). One person versus an entire species? Such a conflict can have only one outcome: the victory of the species. “We” beats “Me” every time the two conflict.
The reason the risk of (GI) is so big is that other humans, yourself included, manufacture that risk to discourage you from (GI) behaviours. If you were to buy the chips instead – a (G) behaviour, good for everybody – there would be no risk. But there would be a small cost, that cost being far smaller than the risk of resource exploitation.
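The risk-versus-cost argument is really an expected-value comparison. A sketch with made-up numbers (the price, probability, and penalty are illustrative only):

```python
def expected_cost_of_theft(p_caught: float, penalty: float) -> float:
    # (GI) behaviour: no purchase price, but a manufactured risk of getting
    # caught, scaled by the severity of the consequences.
    return p_caught * penalty

def cost_of_purchase(price: float) -> float:
    # (G) behaviour: a small, certain cost and zero manufactured risk.
    return price

# Illustrative numbers only: a $3 packet of chips, a 10% chance of being
# caught, and $500 worth of fine-plus-consequences if you are.
theft = expected_cost_of_theft(p_caught=0.10, penalty=500.0)   # 50.0
purchase = cost_of_purchase(3.0)
print(f"expected cost of theft: ${theft:.2f}, of purchase: ${purchase:.2f}")
```

The manufactured risk works precisely by making the left-hand number much larger than the right-hand one, which is the whole point of the laws discussed next.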
Where does the risk come from?
Remember I said we "manufacture" risk for (GI) behaviours? We do it by writing and agreeing to laws, and by electing officials whose policies align with the rules we want, or can accept, as creating risk against (GI) behaviours we don't like. This is politics, and politics is our means of sorting out, through trial and error, which behaviours are (B), (GI), (GC), and (G), and then writing rules for each of them. Murder is a behaviour that's mostly (B) and sometimes (GI). So we penalise it, to try to make it always (B) so people won't do it. We hate murder! We all need to not be murdered, so most of us are willing to give up murdering other people if they'll all agree never to murder us. Our consensus therefore generates a law against the murder behaviour, and the existence of that law manufactures an enormous risk for anyone doing it.
What we should note here is that this act of politics – law-making, arguing over whom to elect – is (GC): good for civilisation but bad for the individual. Bad? It's stressful and taxing to do! It consumes resources. An act of politics can, and routinely does, temporarily reduce a person's quality of life. People abuse and threaten each other over what they want to outlaw and what they want to allow. In other words, we sometimes even resort to (B) behaviours (like threatening) in order to push our agenda to classify OTHER behaviours as (B) or (G) or whatever. It's a messy, sloppy, painstaking, slow, but absolutely inexorable game.
The outcome of politics has never been perfect. In fact there has never once been an outcome to politics, since it's a process that exists in a continuous state for exactly as long as humans exist. In very literal terms the only "political outcome" possible is humanity's extinction. Elections have outcomes, coups d'état and rebellions have outcomes, but politics itself simply marches on.
4. Politics and human needs
The fact is, humans all have the same basic needs (though we fulfill them differently, we all share a need for oxygen, water, food, shelter, safety, etc., as mapped in the Maslow hierarchy). And because we all have the same needs, as individuals we can only bicker over how to fulfill those needs and which to prioritise – a ranking each of us bases on whichever of our own needs are unmet. Consider all those security-obsessed people clamouring about government technology policy: do you think they feel secure? No, they do not, which is why they want their elected officials, corporations, and fellow humans to satisfy that need for them and everyone else more than they want to feed the starving. They're not starving; as far as they're concerned the food need is met! Security is their issue, because security is their most pressing unmet need. Such people instinctively know it's a need for everybody, so they talk about it in terms of everyone's best interest, like the (G) it is. Then, when people come at them talking about prioritising food for starving people over the security issue they're pushing, they feel threatened, because they see that as a threat to their security. Politics, baby!
Because of this narrow scope of political conflict, progress is inevitable. We create and test and scrap and keep systems for enabling human needs. That's all politics is! The systems we use are always imperfect and often terrible, but they will continue to improve over time. That improvement is an inevitability, because human beings are just so compulsively productive.
Thanks for learning!
Today Harvard Business Review (publishers of all manner of goodies at hbr.org) emailed me some sales material with the hardest-hitting subject line in the history of sales email subject lines. It was a thing of strangely cohesive abstract beauty, like Frankenstein's monster would be had he been sewn together from various and sundry chunks of Disney Princesses rather than local cadavers.
Here is Harvard Business Review's incredible email subject line:
Get 7 Free Gifts When You Subscribe and Save Up to 80%
That subject line is a rich vein of psychological sales gold. I counted 10 distinct psychological principles utilised in it. Maybe you can spot even more. Here are mine:
This is email list marketing at its most thoroughly researched and deviously influential. I am richer for the experience.
If you'd like to learn more about how human psychological principles are veritably milked for their applications in clickbait, here's a clever one from Wired.com:
You’ll Be Outraged at How Easy It Was to Get You to Click on This Headline
Thanks for learning!
FEAR. STEAL. SCARY. DISAPPEARANCE. AFRAID. SCARED SHITLESS.
The existential fear of Artificial Intelligence, as defined by Elon Musk, Stephen Hawking, and their peers, and published on FutureOfLife.org, has merit. It's a "large scale existential risk". Read it. Sign it if you share these distinctly Asimovian concerns. They boil the topic of AI down perfectly: "Our AI systems must do what we want them to do."
But what about the fear of job automation? The fear of "technological unemployment"?
No, that one is not a matter of existential risk. That's a matter of personal job security, DEFINITELY. Which is a problem in itself, but of a drastically lesser magnitude. The effects of large scale technological unemployment on an economy, however, are merely a catalyst for a very simple and (from a humanitarian viewpoint) very overdue change to capitalism.
It's a matter of existential change.
Maybe it feels like a big threat, after the sustained mass media onslaught we continue to endure. The fact that automation is by its nature an agent of change makes it irresistible for panic-mongers. There's even some overlap between the topics of AI and job automation: Artificial Intelligence precursor software already is a portion of the technology humans use to automate work.
Nevertheless, the fact remains that technological unemployment is neither inherently good nor inherently bad.
This article substantiates that claim. Point by point, we'll address every assumption fundamental to the fear of automation and technological unemployment as a "large scale existential risk".
We'll start with a quick understanding of the labor market.
The range of tasks that humans can do as work is as broad as the imagination. But all those tasks fit into two categories: Manual labor and Knowledge work.
A "job" is a set of tasks, some of which are manual labor and some of which are knowledge work. The ratio determines whether the job is called "manual labor" or "knowledge work", but in fact all jobs contain both to some extent. Even jobs reliant on expert knowledge still invariably require the movement of matter: consider what a surgeon does with a scalpel, an architect with a stylus, or a developer with a keyboard. Conversely, all manual labor requires the interpretation of instructions into action, and the ability to communicate; both, by definition, are knowledge work tasks.
So now we have a clear understanding of tasks, jobs, manual labor, and knowledge work. No doubt you can relate that to your own situation.
Losing a job you wanted to keep is a problem, no matter how it happens.
Whether all your tasks are more cheaply automated or you're fired by some hostile manager, losing work can be a major problem for the worker.
But a worker losing a job isn't a problem for society as a whole, no matter how certain you are that it is. If you're interested in understanding the impact on society of the inevitable automation of all work -- and therefore all jobs -- you've come to the right place. We'll analyze every argument espousing automation as a social menace, and you'll also learn about a quantitative proposal for improving our primary means of resource allocation in a major contemporary economy, from the current "work-based" model to a measurably better one. The mathematics demonstrate the financial viability of the solution using current federal data. It's open for discussion.
Assumptions derived from feelings are a poor substitute for a data-based world view. These assumptions don't stand a chance.
Let's get into it!
Assumption 1: "Work is the only acceptable way for humans to gain resources"
This attitude is short-sighted. If it were true, technological unemployment would be Capitalist Armageddon brought on by the Four Horserobots of the Apocalypse.
Fortunately, the assumption is false.
It's also illogical and dangerous.
The assumption "work is the ONLY way to make income" is an example of taking a DESCRIPTIVE view of the world — in this case, describing that working in exchange for monetary remuneration is for many people the MOST AVAILABLE and sometimes the ONLY way to accrue money — and turning that description into a PRESCRIPTIVE view of a world in which the assumer believes everyone should live that way no matter what. It's an act of observing a situation and then trying to push that situation on everyone else without bothering to ask one crucial question: "is there a better situation?"
Or perhaps the logic is that since the assumer can't think of a better way, then the situation they're in must be the best one possible: "If I'm in this situation, everyone else should be too. I don't even care if there's a better way. We should do it my way because if other people have a better system but I got this worse system, well... THAT'S NOT FAIR ON ME. So we should all get my system so that everyone gets the same deal and that's fair on everyone. And everyone's experience should be consistent with MY experience because I'm already part way through having it and to change my system would be unfair on me."
Despite the obvious logic errors, this assumption stands against any improvement to the system. When a better system becomes available, owners of this assumption fight against it because they value "others not having more than I have" over "people who have the least having more than they do now".
Human work in exchange for resources is our current model in most of the world. That's a description of the situation; I just described it. This system of resource allocation is often called "Capitalism", and there is a lot of evidence suggesting it is the best resource allocation system yet trialed in a human society. Despite that possibility, the system is profoundly flawed, requires strict democratic regulation, and has continuously failed to produce a society in which all members have enough resources on which to live. We have more than enough efficacy data from Capitalism in situ to know unequivocally that it is inadequate.
So if our best model is inadequate, what then? We'll modify it. Henceforth we'll refer to the system as "Hostile Capitalism" since it allows for humans to suffer and die for no other reason than having limited access to the resources they need (which is the bad part), while allowing corporations and individuals to profit in proportion to their work (which is good).
When automation makes human labor increasingly obsolete, an increasing proportion of humans will be unable to accrue adequate resources on which to live if Hostile Capitalism remains in effect. Logic tells us the system needs to be modified to ensure that this inevitable change to civilisation is not calamitous. To be clear, we are talking about the poorest of us having a shot at survival. If you think "the percentage of humans with adequate access to survival-enabling resources" is NOT the primary metric of civilisation, then get back in your toilet. If you think that percentage is the primary qualitative measure of civilisation, or don't know and want more information before forming an opinion, read on for a quick tangent on Income Inequality.
Income inequality of ANY differential is acceptable ONLY IF every human has adequate income and access to the means to satisfy their human needs.
Automation is therefore harshly and fundamentally at odds with our current capitalism model in which human labor is the primary way humans gain their living resources.
At first glance, it appears automation is set to screw up human resource allocation! Automation and capitalism appear to be in conflict.
Fighting automation is not the answer. That battle can never even be waged, let alone won. Stasis can never beat adaptiveness.
Nor should it, since resource allocation among humans need not FULLY depend on human labor. It can in part. Or not at all, as in the ancient-to-modern use case shared by beggars and monarchs.
Resources can be allocated among humans in other ways.
The automation of labor leads to both greater VOLUME of resources (compare the yield of a $300,000 crop harvester to the yield of $300,000 worth of human labor) and greater VARIETY of resources (compare the clothing available before the washing machine was invented to the clothing available after washing machines became household appliances).
From those effects on resources, you can deduce that automation makes the world a better place for humans to live in.
Fighting capitalism is not the answer either. For there can never be a consensus on the fair distribution of ALL resources. There is no precedent in human history. The "trickle down" of wealth demonstrably doesn't work.
Adapting capitalism is the answer. Remember, all that's needed is a minimum redistribution of resources that enable the survival of everyone. Not a redistribution of ALL resources like in some miserable socialist communist wasteland.
No. A portion of all resources -- like that accumulated in any democracy's tax system -- CAN be allocated fairly, and without altering the fundamental tenets of capitalism. Concepts like Universal Basic Income only require a portion of society's wealth to be implemented. In so doing, all humans gain sufficient resources on which to subsist, thus freeing them from their DEPENDENCE on labour to live.
The cost of this is increased taxation: a trivial price to pay to ensure that all children eat. But what portion is needed? That's a good problem to solve, and none but the most selfish mind would argue against it after seeing the viability of its mathematics, which are as follows:
Here we see US Basic Income as a means to meet basic human needs is mathematically and financially viable right now in the year 2016.
Poverty can be eliminated from the USA at the cost of reallocating 40.39% of the nation's GDP. Today.
(You'll note the credibility of the data sources used in this exercise: World Bank, CIA World Factbook, US Federal Reserve, US Federal Census -- all of it less than a year old. If you have better data, send it in and let's refactor; maybe the 40.39% will shift as high as 41%! If you want to verify these data sources, they're listed in the article's footer.)
So 40.39% of GDP reallocated to save the impoverished. We'll call this system "Benign Capitalism", because it does not allow humans to suffer and die for their lack of resources (which is good), while allowing corporations and individuals to generally profit in proportion to their work (also good).
In order to transition from "Hostile Capitalism" to "Benign Capitalism" nothing more than a taxation increase to 40% is needed to preserve Americans' quality of life. To achieve this any tax model can be used, as decided by elected officials chosen by the voting public.
The above equation is mathematical proof of the viability of Universal Basic Income as a means of ensuring every American adult's base needs are met.
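Since the equation itself appears above as an image, here is a minimal sketch of the kind of arithmetic it describes. The input figures below are my own rounded assumptions standing in for citations 1 through 4 (cost of living, population, under-18 share, GDP); plug in the exact sourced values to reproduce the 40.39%.

```python
# Hedged sketch of the arithmetic behind a "basic income as share of GDP"
# figure. The inputs are rounded assumptions, NOT the article's exact
# sourced values from citations 1-4.

cost_of_living = 29_500           # assumed annual cost of living per adult (USD)
population = 321_400_000          # approx. 2015 US population (CIA World Factbook)
under_18_fraction = 0.23          # approx. share of populace under 18 (US Census)
gdp = 18_000_000_000_000          # approx. 2015 US GDP in USD (World Bank)

adults = population * (1 - under_18_fraction)
annual_bill = adults * cost_of_living      # total cost of a universal basic income
share_of_gdp = annual_bill / gdp           # portion of GDP to reallocate

print(f"Adults covered:  {adults:,.0f}")
print(f"Annual cost:     ${annual_bill:,.0f}")
print(f"Share of GDP:    {share_of_gdp:.2%}")
```

With these rounded inputs the result lands near 40%, in the same neighbourhood as the article's figure.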
But what effect would this have on automation?
The effect would be that the owners of companies would continue to profit. Demand for their products would remain generally unchanged, as the very core of capitalism remains unaltered. That 40% of the US's money isn't leaving the economy, it's just moving through a different channel -- one which by its nature guarantees most of it will continue to move, and not wind up sitting static in billionaires' personal surplus. Some producers would see greater profits as a result of there now being more consumers with money to spend in the market. Other producers would see profit diminish, as the higher taxation of their customers affected their ability to spend. The market would do what it always does: balance. And capitalism would continue -- this time with the sustained participation of ALL individuals in the society.
Automation, therefore, would continue. Businesses would be just as compelled to find automated solutions cheaper than the human labor their production processes require. And humans would continue to work as and when they saw fit. More importantly, workers would find work they enjoyed, and feel less pressured to take work they dislike, because their basic income frees them from needing such work in order to live -- as is the reality for some of the humans in your community as you read these words.
Is Benign Capitalism fair in bipartisan politics?
Millions of US citizens use the "left-right politics" model every day to discuss their politics. Maybe you're among them. The model is a simplification, but it's a useful lens for checking whether the Benign Capitalism model is at odds with any group on that spectrum. Does it favor one end over the other? Is it a leftist apocalypse scenario in which the laziest among us are rewarded equally with the most hard-working and hard-sacrificing individuals? What we want to know is whether Benign Capitalism is actually fair.
The social psychologist and author Jonathan Haidt has pointed out that when you speak of fairness to someone with a more "left-leaning" worldview, it means leaving no one behind and creating equal opportunities for everybody. However to someone more conservative the notion of fairness means getting what you deserve and have worked hard for.
Does Benign Capitalism serve both those needs? The basic income component satisfies the definition of fairness in which everybody is permitted the means to live. And the capitalism component satisfies the definition of fairness in which reward is directly proportionate to effort: you work hard, you earn more than people who work less hard, same as you do today. Which definition of fairness does Benign Capitalism favour more strongly? In fiscal terms, with 40% of all wealth allocated among citizens as basic income, 60% of all wealth remains available to anyone wishing to work to acquire it.
In left-right political terms, 40% of GDP serves left-wing liberal ideals, and 60% of GDP serves right-wing conservative ideals. That's a 2:3 ratio in favor of the right-wing definition of fairness in which the left-wing definition of fairness is still completely satisfied.
No matter where on the left-right spectrum you put yourself, if Benign Capitalism doesn't sound like a compelling idea for improving the lives of the US populace without political compromise, you can go ahead and viciously subtweet me right now.
Assumption 2. "Some jobs won't get done, because there are some jobs nobody wants to do"
This assumption is a grim one. It assumes that the work we, as individuals, might consider unenjoyable – like cleaning sewers, rubbish collection, or janitorial work – is so offensive to EVERY human that they all would rather it not be done than do it themselves.
In a resource allocation system like UBI where all humans receive enough money to live on, there will always be those who want more money than the minimum. Perhaps a minority, perhaps a majority. Or perhaps everyone, since humans are innately driven to fulfill their many and various desires. But just as human nature still applies, so too do the fundamental rules of capitalism. There is still manual human labor to be done. There are still jobs to do it that pay money. And there will therefore be humans willing to do the work.
It might be that work broadly considered distasteful would pay more than it did under the Hostile Capitalism system; in Benign Capitalism human labor has become a more valuable commodity, now that all humans are free to value their own time in the market without being forced to undervalue it in order to stave off death. Sewer-scraping, therefore, costs more money.
Humans will work for additional resources wherever they value the additional income over a portion of their free time. The labor market persists.
And so too does the onward march of exponential automation.
What jobs do you think will be automated foremost? The ones that fewer people want to do, which therefore cost more to employ humans to do, which therefore yield greater savings to the company buying the automation? Seems likely.
Assumption 3. "The market will collapse because human labor will be prohibitively expensive"
Again, let's reiterate: THE RULES OF CAPITALISM HAVEN'T GONE AWAY. Some humans will value their own labor higher than the market does, and get less work as a result. Some will sell their labor more cheaply and get more work. The market will balance, and the work will still get done. Capitalism.
With human labor more expensive overall in Benign Capitalism than in Hostile Capitalism, automation will become even more highly sought. Automation itself becomes more valuable.
Consider an hour of the cheapest human labor costing $9, in Hostile Capitalism. Automating that labor to reduce the cost to $1 per hour is a saving of $8 per hour -- at the cost of the human worker who is now in major financial trouble.
Imagine that in Benign Capitalism the same hour of human labor now costs $15, because nobody wants to do it too cheaply, and nobody NEEDS to do it that cheaply in order to feed their kids. Automating the same task in the same way to cost $1 per hour is now a saving of $14 per hour -- still at the cost of the human worker's job, but that worker now faces no major financial trouble.
The comparison shows that while human labor is a more valuable commodity in Benign Capitalism than in Hostile Capitalism, automation is more valuable as well. More automation is therefore required for businesses to maintain profits, and human labor would be most efficiently spent automating labor. Regardless of what happens to the JOB market (in terms of the ratio of human-jobs to automated-jobs, insofar as distinct "jobs" still exist), the LABOR market improves, since work is being done for less expense.
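The two scenarios above can be sketched numerically, using the article's illustrative hourly rates:

```python
# Automating one hour of the cheapest human labor under each system,
# using the article's illustrative figures ($9, $15, $1 per hour).

automated_cost = 1     # $/hour for the automated solution, in both systems

hostile_wage = 9       # cheapest human labor under Hostile Capitalism ($/hour)
benign_wage = 15       # the same labor under Benign Capitalism ($/hour)

hostile_saving = hostile_wage - automated_cost
benign_saving = benign_wage - automated_cost

print(f"Saving per hour, Hostile Capitalism: ${hostile_saving}")   # $8
print(f"Saving per hour, Benign Capitalism:  ${benign_saving}")    # $14
```

The higher the human wage, the bigger the saving from automating it, which is why automation gets MORE attractive, not less, when human labor is better paid.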
Assumption 4: "When labor is done more cheaply by machines there will be less labor available"
In a society in which resources are allocated to enable a minimum standard of living for all members, any member who wants to work for additional resources can choose to do so in any manner available, as before. The complete freedom to work offered by Benign Capitalism ensures that humans who choose to work will be in a position to do work of interest to them -- more so than in Hostile Capitalism, in which some humans are forced to accept the first work opportunity that presents itself, regardless of interest. This is fundamental to the human need for "self-actualisation", defined right at the peak of the Maslow hierarchy.
When you have possible entrepreneurs in the billions, you have innovation on a grander scale than anything before in civilisation. The result of innovation is invariably greater resource abundance, and greater resource variety.
Assumption 5. "When humans are freed from the need to work, they WILL NOT work"
Anecdotal evidence can certainly be provided by any assumer claiming that they, as an individual, would not work if given the choice. There may be others who publicly claim that they'd give up all work if doing so didn't diminish their lifestyle. I've felt like that a few times before, when doing work that was pointless and without meaning. But I don't feel that way now that my work contributes to my vision of a better world for humans to live in. The will to give up on work is a temporary feeling, not a "type of human".
Can the assumption be disproven then?
Yes, anecdotal evidence at the individual level applies equally to counter the assumption. Look at any successful entrepreneur. Elon Musk has enough money to live a lavish lifestyle for the rest of his days, yet he busts his ass working for his several companies. Some humans just want to do their work. Some just want something helpful to do. These anecdotes describe behaviours measurable over time, not just feelings.
But empirical evidence, of course, is the basis of all good judgement.
Do we have any to validate or invalidate the assumption that humans only work if they need to? Or to show the proportions of humans who will seek out work versus those who would prefer to live off the productivity of others?
Not yet. Universal Basic Income trials are being explored at a national scale in various ways in Europe, notably the work being done in the Netherlands (see citation 8 at the end). Canada and India are investigating and planning their own trials. Even the technology community of California's "Silicon Valley" is getting involved. Efficacy data is near, but not yet here. The knowledge is still being manufactured.
In the meantime we have a hypothesis: basic income will significantly improve the human living experience. These experiments will yield large scale data that empirically shows the economic and sociological effects of basic income, whatever they are. That data can then be used to evaluate the adoption and adaptation of the Basic Income model by other wealthy nations -- like the USA, where the economy can easily support it and the only question remaining is: "Should we?"
Until then, assumption 5 remains unanswered.
Assumption 6. "When all labor is done by machines, humans CAN NOT work"
Exponential automation is a process that's slow to start and quick to end. But that end only means the automation of all existing work. New work can be created from any idea -- and when work like viability tests, polling, and market research are automated, entrepreneurship will eventually be a matter of ideation -- an intrinsically human knowledge task.
During the process of automating all work, human labor will shift first out of the realm of necessary manual labour (the OPTION of human manual labor will always exist), continuing the existing trend of human knowledge work uptake. In 1920 the ratio of manual laborers to knowledge workers was 2:1. By 1955 the ratio was 1:1. And in 1980, the ratio was 1:2. [Citations 9, 10]
That transitional period was a major milestone in the human labor market, and the exponential growth of that ratio has been in effect ever since. The same pattern is being observed with automation. Note in the graph below, published by technologyreview.com in 2013, the spike around 2000 in the differential between US domestic productivity and "job growth", and the widening separation ever since.
(View this graph and more on the original article by David Rotman https://www.technologyreview.com/profile/david-rotman/ here on https://www.technologyreview.com/s/515926/how-technology-is-destroying-jobs/, where he looks in tremendous, empirical detail at the trends and effects of automation in the labour market)
From the above data, we see two significant trends:
What if these two trends continue? Automation and job growth began to part ways long ago: that's quite the efficacy test. We know we will still need a way to allocate resources, and that working for them won't remain viable for everyone. We also know that fighting automation is impossible. Benign Capitalism is financially viable and resolves the most pressing issue by decoupling resource allocation from human labor. And it does so without compromising capitalism or the nation's economy. (Yes, I'm clearly trying to sell you this idea. You should buy it, while you still have some disposable income to buy things with.)
Humans will do less work, as technology does more. An increasing portion of the work done by humans will be new types of work that become viable as a result of technology (i.e. the greater resource variety produced by advances in automation). Eventually, the work done by humans will be predominantly innovation: new work, invented to produce resources more efficiently, or to produce new resources that didn't previously exist.
You'll work if you want. And your original ideas will have more value than ever before, thanks to automation.
Assumption 7. "The purpose of human life is to work. When all work is automated, including all innovation, human life will be POINTLESS"
I find this a difficult thing to hear. It's certainly too difficult for me to think.
The purpose of human life is to experience joy. Not to pursue joy, and not to create it. Specifically to experience it. Pursuit and creation of joy are definitely part of the means. But the experience of joy is the end goal.
To deny that fact is a hypocrisy. Here's a breakdown of why:
From those 3 points we can deduce that every human action is traceable, directly or indirectly, to the pursuit of joy as the end goal.
Consider as many examples as you want. You'll find the logic always holds up.
(By all means cite your clever personal or hypothetical anecdote contrary to these points if you'd like to try to disprove these claims! If you believe the prime directive of human life is NOT to experience joy, it would be valuable to us both to understand the basis of your belief. It would be interesting to know if any of those claims were wrong.)
Regardless of our beliefs, the fact remains that once all work is automated there will be nothing left to humanity BUT the pursuit of joy.
So we'd better make sure our descendants can handle it.
Thanks for learning!
Not done with this topic? Good, because automation is far from done with you. Check out this fantastic, detailed resource from The Atlantic, A World Without Work
And this punchy conversational resource by the genius Eliezer Yudkowsky, who reminds us that labour isn't a finite lump; the scope of possible labour literally matches the scope of the aggregated human imagination.
Or broaden your world view by exploring the citations and data that contributed to this article, upon which this proof of concept for Benign Capitalism was based. Inspect the merchandise, so to speak. I'm sure you'll find it all in order.
CITATION 1: Cost of living calculated for the United States, specified by CareerTrends.com: http://cost-of-living.careertrends.com/l/615/The-United-States
CITATION 2: 2015 Population of USA, specified by CIA World Factbook: https://www.cia.gov/library/publications/the-world-factbook/geos/us.html
CITATION 3: 2014 USA Populace under 18, specified by US census data: https://www.census.gov/quickfacts/
CITATION 4: US GDP, specified by WorldBank.org http://data.worldbank.org/indicator/NY.GDP.MKTP.CD
CITATION 5: Why GDP is the most valid metric for measuring the size of a nation's economy, specified by Investopedia.com: http://www.investopedia.com/ask/answers/199.asp
CITATION 6: Maslow's "A Theory of Human Motivation", via Classics in the History of Psychology: http://psychclassics.yorku.ca/Maslow/motivation.htm
CITATION 7: Maslow's hierarchy of needs, via Wikipedia: https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs
CITATION 8: The Atlantic on the Utrecht (Netherlands) basic income experiment: http://www.theatlantic.com/business/archive/2016/06/netherlands-utrecht-universal-basic-income-experiment/487883/
CITATION 9: http://www.nickols.us/shift_to_KW.htm
CITATION 10: http://forschungsnetzwerk.at/downloadpub/knowledge_workers_the_biggest_challenge.pdf
A data driven look at the history of automation from The Guardian https://www.theguardian.com/business/2015/aug/17/technology-created-more-jobs-than-destroyed-140-years-data-census
This post is to thank Concept Frontier readers for coming to my website and learning what I have to offer. Here are 30 ideas, for free, for fun, for profit -- basically for you to use however you want.
Earlier today I read a piece by Yann Girard who said "Ideas are the currency of the 21st century." I completely disagree. Ideas are no more valuable than seeds: you only have something to profit from once you've grown it big enough to give you back a resource like timber or food. To me, ideas are just brain seeds.
And since, like Bill Bailey, I have a dandelion mind, I like to throw a bunch of them into the atmosphere.
Plant some of these. See what you get. They're 100% free of obligation, and some of them might even be original!
What do I get out of it? Cognitive exercise.
What do you get out of it? Free ideation, and 30 chances to create something cool.
30 Ideas, Free for Commercial Use
OK, I noticed the last one was way too broad, to the point of YOU having to do some of the ideation! Time to stop, have a cup of tea, and take a walk.
Still, 30 ideas in 30 minutes is not bad! I had planned to try to do 100 in an hour, but I had to stop at half time due to cognitive fatigue.
If you love the concept but none of this stuff appeals or is relevant to you, all is not lost. Besides waiting for me to do another public ideation payload like this one (for my neural exercise and your amusement) your other option is to go bespoke.
That is to say you can hit me up over on Fiverr.com any time. You can answer my meticulously designed set of questions, and purchase your own bespoke idea from the vending machine of my brain.
Any ideas you purchase will belong 100% to you (insofar as an idea can be "owned" -- but I certainly won't use it, recycle it, or even tell it to anyone who isn't you).
I'll even run due diligence and check to see if anything like it exists in the market. Because originality has intrinsic value and so do good manners.
Thanks for learning!
You know that the time you spend alive is your most valuable resource. But sometimes you put off important tasks and screw around doing more interesting stuff even though you know you shouldn't.
If that presumption is untrue, close this page. You're not going to benefit from this.
But if you can relate? You're about to understand yourself a little better.
This article gives you a means for controlling your behaviour and prioritising tasks, by turning your emotions into numerical data you'll use to make logical decisions.
Let's start by acquiring the theoretical model.
Theoretical Model of Distraction
Watch this smashing Concept Frontier InfoPod video. You'll learn how distraction works as a behaviour. That is, how your brain mishandles Interest and Importance so badly that you get distracted from important tasks.
This model explains the principle behind all Last Minute Work ever conducted by frantic human beings since our species first faced the need to work.
Thanks for learning indeed!
Now scoot down for the practical lesson.
The Practical Model
This is how to turn emotions into numerical data. Pretty snazzy stuff.
You'll also find it very easy to do and unbelievably effective.
Note that this is self-reported data, not empirical. And that's absolutely fine, since popping your living skull into an fMRI scanner is not a practical way to decide whether you should keep binge-watching The Walking Dead or tackle your overdue assignment.
It works by giving your logic-loving brain the actual numerical measurements it needs to make data-driven decisions.
Without them, logic is not going to be your primary decision-making force when determining behaviour; your brain will simply fall back on its "default mode" of relying solely on your on-the-fly interpretation of your feelings to determine the priority order of your tasks.
And if you've ever started and completed a huge assignment the night before it was due, then you know just how ugly that can get.
Strong coffee isn't enough. You need a psychological edge.
This is it.
Grab a pen and paper and let's chart your brain's contents.
1. Specify your tasks
Right after you're done here, there's a bunch of stuff you feel compelled to do.
In this article I'll use two example tasks: writing an article or watching a film.
For your copy, pick any two (or more) tasks of your own. They can be in the same quadrant or different ones. Just make sure you choose tasks you can clearly specify, like "work on my thesis" or "play Assassin's Creed". The more specifically you define them, the more specific your results will be.
2. Draw a grid
3x3 like this:
3. Rate the Importance and Interest of the first task on a 1-10 scale
In your grid, rate on a scale of 1 to 10 the Importance and Interest of each task.
Start with the Important task, the work. Maybe it's dreadfully boring, so Interest is low: let's say 2.
But it's quite Important that you get paid for it (or don't get your head kicked in by your teacher/lecturer/employer), so Importance is a 9.
4. Rate the Importance and Interest of other tasks on a 1-10 scale
In this example watching a film is pretty interesting, let's say 8. But it's not really important. 1 out of 10 there. Do this for all the tasks you want to compare.
It's vital you are honest with yourself about the level of interest and importance for each task.
Now you have data. Simple, self-reported data, but data nonetheless.
That was easy, and you could've thought it up yourself! But you didn't. (You're getting it for even less cognitive effort — nice!)
Now you need to use your data table to get a result. In this case, the result will be a priority decision for your tasks.
5. Evaluate your data
In this next part there's an element of trial and error to find what works best for you. And a bit of creativity needed on your part to get it perfect.
But for now we'll use a "default scoring" of simple row addition as a starting point. Don't worry! The results you get this way will still make your decisions a hell of a lot more logical than they were before you fed your brain's starving logic appetite with delicious raw numbers.
Scoring can be as complicated as you like. This one, offered as a default, just has you add the value from the Interest and Importance columns to give the priority score for that task.
In this case, the work has a score of 11, but watching the film only has a score of 9. The default scoring rule is that the activity with the higher score is the one you do first. Logical, right?
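If you'd rather let a machine do the row addition, the default scoring rule is trivial to sketch in a few lines of code. This is just one possible rendering of it; the task names and ratings below are the article's own example values, and the function name is my invention.

```python
# Default scoring rule: priority score = Interest + Importance.
# The task with the highest score is the one you do first.

def prioritise(tasks):
    """Return (task, score) pairs sorted by priority score, highest first."""
    scored = [(name, interest + importance)
              for name, (interest, importance) in tasks.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Each task maps to its (Interest, Importance) ratings on a 1-10 scale.
tasks = {
    "write the article": (2, 9),  # boring but important -> score 11
    "watch a film": (8, 1),       # fun but unimportant  -> score 9
}

for name, score in prioritise(tasks):
    print(f"{name}: {score}")
```

Swap in your own scoring function later if simple addition doesn't fit how you weigh things; only the one line computing the score needs to change.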
What about complexity?
Are you evaluating big, time-consuming tasks? You can evaluate the same tasks again in a few hours or days, or once you've completed them in part or in full. You'll see if and how the Interest and Importance levels change for each task.
If you're someone who values consistency for consistency's sake (apart from being the type of person to absolutely crucify people whenever they change their mind), you may expect absolute consistency in your Interest and Importance levels for every task. You won't get it.
Interest and importance are supposed to change. Just try and produce data based on how you honestly feel in the moment, and embrace the inherent fluidity of your adaptive human feelings.
The more data you produce (even just as an exercise when you don't need help making a decision), the more insight you will gain into your Interest habits and Importance evaluations. If you record the date and time of each assessment, you will even be able to identify your own behavioural patterns over time. Extremely cool if you're into quantifying yourself!
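One hypothetical way to record the date and time of each assessment, as suggested above, is to append every self-rating to a plain CSV file you can eyeball or chart later. The file name and column layout here are my own invention, not part of the article's system.

```python
# Log each self-assessment with a timestamp so behavioural patterns
# (e.g. Interest dipping every evening) can be spotted over time.
import csv
from datetime import datetime

def log_assessment(task, interest, importance, path="priority_log.csv"):
    """Append one row: timestamp, task, Interest, Importance, priority score."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now().isoformat(timespec="minutes"),
            task,
            interest,
            importance,
            interest + importance,  # default scoring: simple addition
        ])

log_assessment("write the article", 2, 9)
```

A spreadsheet or notebook works just as well; the point is only that each rating gets a timestamp attached.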
Are you procrastinating on something important? If you are, give this system a try! Experience turning your feelings on your dreaded Quadrant 3 task (and whatever Quadrant 2 task you're using to indulge your interest!) into a data-driven decision.
In the course of developing this model (producing the video, the article, and the system), I experienced a delay of FOUR EARTH MONTHS due to distraction.
Making this article and the associated video were Quadrant 1 tasks: Both VERY Important to me, and Interesting to me.
But the distraction I experienced was another Quadrant 1 task with high Importance and high Interest: a project for a client. So even though the importance and interest of producing this content were super high, the external project took precedence, because my client's reliance on me meant I ranked it slightly higher on the Importance scale over that four-month period.
This stands as an example of two tasks both in Quadrant 1 (the BEST QUADRANT that is, with high IMPORTANCE and high INTEREST) being evaluated in this very system described above. Meta!
The data for these projects were:
Client project: Imp: 10, Int: 9
Concept Frontier project: Imp: 9, Int: 9
Thanks for learning!
Now go make some excellent decisions, and remember to spend as much time as you can in Quadrant 1.
Concept Frontier's mission is to optimise the three ways humanity uses information: