
Seymour Papert

Source: MindShift, May 2017

Papert had a vision of children learning with technology in ways that were revolutionary. He believed that kids learn better when they are solving problems in context. He also knew that caring passionately about the problem helps children fall in love with learning. He thought educating kids shouldn’t be about explanation, but rather should be about falling in love with ideas.

Papert also believed strongly in the ways people learn from one another, and he thought technology could play a big role in breaking down barriers between people. In the 1980s when he was talking about these ideas, the technology wasn’t yet capable of what he dreamed, but now it can do more.

Lastly, Papert believed in the transformative power of play — not just carefree play, but “hard play.” He believed that when children are challenged through exploration and discovery, they can learn a tremendous amount. In this short video, Mitch Resnick of the MIT Media Lab explains how Papert’s ideas have shaped his thinking about children, learning, and technology ever since.

Reasons for an Unpersuasive Argument

Source: Fast Company, Apr 2017

YOU TRIED TO WIN A WAR OF IDEAS

To get around that, you need to make the other party feel that you’re both on the same side. That doesn’t mean retreating from your ideas altogether, or pretending they don’t differ when they do. It’s more about acknowledging what you already agree on, and showing how your difference of opinion starts from a shared premise.

YOU DIDN’T LISTEN ACTIVELY

When someone feels like they’re really being heard, they become more open to your ideas. So to create that openness, you need to avoid the common trap of thinking about how you’ll respond once somebody else is done talking. Listening is just as much a skill as argumentation, but it’s often harder to teach. One simple way to let others know that they’re being heard is simply to repeat back or paraphrase something you’ve just heard them say, then ask for clarification. This way you can delve deeper into what they’re expressing, instead of just bluntly countering their perspective with your own.

YOU DID HALF THE TALKING

If you’re trying to be persuasive, you’ve got to make the other person feel they’re in control of the situation, not you. It’s often said of the best listeners that they talk a lot less than other people—and that’s true of the most persuasive people as well. While the other party is speaking, listen for opportunities to connect and agree with them. See if you can get insights into their values and the reasons they think the way they do.

… Keep your mouth shut more, and tune in. The more common ground you can stake out with somebody, the better your shot at persuading them. We’re all more likely to trust people who we think share our beliefs, values, and interests.

YOU GAVE TOO FEW (SINCERE) COMPLIMENTS

If you can find something you genuinely appreciate about the other person and get that across candidly, they’ll be a lot more open to anything else you have to say. It also strengthens your relationship and makes them think of you more favorably. Your ability to see the positive in them elevates you in their eyes and gives more credibility to everything you say and do.

YOU DIDN’T LET THEM THINK IT WAS THEIR IDEA THE WHOLE TIME

One of the best ways to persuade others of your idea is to plant it in their minds and let them believe they came up with it. The best way to do this without being manipulative (or a professional hypnotist) is simply to make suggestions, framing your ideas as possibilities.

It’s all about leaving the other person feeling empowered enough to make up their mind themselves. Again, it’s not about trying to win a contest between two opposing points of view. Take your ego out of it and allow them to take credit for the idea. An idea that we believe we came up with—or even that we’re partly responsible for—always appeals more than one that someone else exclusively generated.

YOU DIDN’T SEEM CONFIDENT AND KNOWLEDGEABLE

We’re predisposed to put our faith in people who sound confident and appear to know what they’re talking about. If you aren’t totally convinced yourself, your hesitation will show through and undermine your credibility in others’ eyes.


Patrick Collison (Stripe) interviews Tyler Cowen

Source: Medium, Jan 2017

So I think a central question for macroeconomics is economic growth. I think our understanding of the determinants of growth, just like our understanding of how well science does, is extremely poor. Much of that is ultimately cultural, and bridging economic and cultural ideas we’re very bad at.

You need to be weird and have a theory of your own weirdness that’s different from what your own weirdness actually is, because you, too, are looking at it from the inside. And this area has that. It’s great. It’s inspiring.

You want to optimize for “What makes this country the most creative?” And that’s different than just making us happy. We’re doomed to be the somewhat screwed up, unjust, not-quite-happy-maybe-more-mentally-ill country. And we’re the Atlas, in some other sense, partly carrying some of the world on our shoulders.

Tools for Thought

Source: Acko.Net, 2016

What does it look like to explain algebra in a purely graphical way? Can we tackle more complicated subjects like Fourier Analysis this way? With wub wub?


FB: Pride Drives Employee Engagement

Source: Fast Company, Apr 2017

According to an internal study recently undertaken by the company’s HR department and Wharton professor Adam Grant, the key element of employee engagement turns out to be pride in the company. “When people feel proud to work here,” the authors write in the report, “they are more satisfied, more committed, more successful, and more likely to recommend [Facebook] as a great place to work.”

Pride is tied to three primary factors. The first is optimism—believing in the company’s future. The second is mission—caring about the company’s vision and goals. And finally there’s social consciousness—having confidence that the company is improving the world.

Related Resource: Fast Company, Apr 2017

1. BELIEVING IN THE COMPANY’S FUTURE

“Pride is fueled by optimism—being able to touch and taste an exciting future for the organization. People are proud to work at Facebook when they expect that the products they build will shape the world, not just inhabit it.”

2. BELIEVING IN THE COMPANY’S VISION

“Our data show that when people are passionate about Facebook’s essential mission—making the world more open and connected—their relationship with the company changes. Work is more than a job or a career: It becomes a calling.”

3. BELIEVING THAT THE COMPANY IS A FORCE FOR SOCIAL GOOD

“When people can actually see how Facebook makes a difference, they find their work more meaningful. It makes them feel connected to something bigger than themselves, and they bring more of themselves to work.”

Nena: “Create your future in the present moment”

Source: Live Nation TV, Sep 2016

On your new album Oldschool there’s a song called “Genau Jetzt” that is similarly moving.
I wrote the song with [Samy Deluxe], a famous German rap guy. We are both people who are very conscious about the present moment. Life is not about the past or the future; you create your future in the present moment. That’s what the song is all about. Even if you don’t understand the lyrics, somehow you get the feeling of the song.

Unexplainable AI

Source: MIT Technology Review, Apr 2017

The mysterious mind of this vehicle, an experimental self-driving car, points to a looming issue with artificial intelligence.

Deep learning, the most common of these AI approaches, represents a fundamentally different way to program computers.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable?

From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
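
To make that contrast concrete, here is a minimal sketch in Python (illustrative only, not from the article): instead of a programmer writing the rule y = 2x + 1 by hand, the program is given only example inputs and desired outputs, and gradient descent fits the rule’s parameters on its own.

```python
# Hand-coded: the programmer writes the rule explicitly.
def rule(x):
    return 2 * x + 1

# "Self-programmed": only example data and desired outputs are supplied;
# gradient descent discovers the parameters of the rule.
examples = [(x, 2 * x + 1) for x in range(10)]  # (input, desired output)

w, b = 0.0, 0.0                  # the learned "algorithm" starts blank
for _ in range(2000):
    for x, target in examples:
        pred = w * x + b         # current guess
        err = pred - target      # how wrong the guess is
        w -= 0.01 * err * x      # nudge parameters to shrink the error
        b -= 0.01 * err

print(round(w, 2), round(b, 2))  # converges to roughly 2.0 and 1.0
```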

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
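
As a rough illustration of that description, here is a toy network in Python with NumPy (a sketch of the general technique, not any system from the article): each simulated neuron computes a weighted sum of its inputs, applies a nonlinearity, and feeds the next layer, while back-propagation pushes the output error backward to tweak the individual weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: XOR, a classic function no single-layer network can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 simulated neurons feeding a single output neuron.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: each neuron receives inputs, performs a calculation,
    # and outputs a new signal that feeds the next layer.
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # overall output

    # Back-propagation: push the output error backward through the
    # layers to find how each weight contributed to it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Tweak the calculations of individual neurons (gradient descent).
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```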

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
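
The dog example corresponds roughly to the layer stack of a convolutional network, sketched below in Python using PyTorch (assumed available; the model and its comments are illustrative, since which layer ends up detecting what is learned rather than programmed):

```python
import torch.nn as nn

# Layer stack for a hypothetical dog recognizer; abstraction rises with depth.
dog_detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # lower layers: outlines, color
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # middle layers: textures like fur
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher layers: parts like eyes
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),                             # topmost layer: "dog" score
)
```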

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” says Tommi Jaakkola, a professor at MIT. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program.

“It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

A chapter of Daniel Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do.

“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”