Straight A-students != Best Innovators

Source: The Conversation, date indeterminate

Where do innovators come from? And how do they acquire their skills?

One place – perhaps among the best – is college. Over the past seven years, my research has explored the influence of college on preparing students with the capacity, desire and intention to innovate.

In this time we’ve learned that many academic and social experiences matter quite a bit; grades, however, do not matter as much.

As GPAs went down, innovation tended to go up. Even after considering a student’s major, personality traits and features of the learning environment, students with lower GPAs reported innovation intentions that were, on average, greater than those of their higher-GPA counterparts.

Additionally, findings elsewhere strongly suggest that innovators tend to be intrinsically motivated – that is, they are interested in engaging pursuits that are personally meaningful, but might not be immediately rewarded by others.

We see this work as confirmation of our findings – grades, by their very nature, tend to reflect the abilities of individuals motivated by receiving external validation for the quality of their efforts.

Perhaps, for these reasons, the head of people operations at Google has noted:

GPAs are worthless as a criteria for hiring.

Here is what our analyses have revealed so far:

  • Classroom practices make a difference: students who indicated that their college assessments encouraged problem-solving and argument development were more likely to want to innovate. Such an assessment frequently involves evaluating students in their abilities to create and answer their own questions; to develop case studies based on readings as opposed to responding to hypothetical cases; and/or to make and defend arguments. Creating a classroom conducive to innovation was particularly important for undergraduate students when compared to graduate students.
  • Faculty matters – a lot: students who formed a close relationship with a faculty member or had meaningful interactions (i.e., experiences that had a positive influence on one’s personal growth, attitudes and values) with faculty outside of class demonstrated a higher likelihood to be innovative. When a faculty member is able to serve as a mentor and sounding board for student ideas, exciting innovations may follow.

Interestingly, we saw the influence of faculty on innovation outcomes in our analyses even after accounting for a student’s field of study, suggesting that promoting innovation can happen across disciplines and curricula. Additionally, when we ran our statistical models using a sample of students from outside the United States, we found that faculty relationships were still very important. So, getting to know a faculty member might be a key factor for promoting innovation among college students, regardless of where the education takes place or how it is delivered.

  • Peer networking is effective: outside the classroom, students who connected course learning with social issues and career plans were also more innovative. For example, students who initiated informal discussions about how to combine the ideas they were learning in their classes to solve common problems and address global concerns were the ones most likely to recognize opportunities for creating new businesses or nonprofit social ventures.

Being innovative was consistently associated with the college providing students with space and opportunities for networking, even after considering personality type, such as being extroverted.

Networking remained salient when we analyzed a sample of graduate students – in this instance, those pursuing M.B.A. degrees in the United States. We take these findings as a positive indication that students are spending their “out-of-class” time learning to recognize opportunities and discussing new ideas with peers.

 

From our findings, we speculate that this relationship may have to do with what innovators prioritize in their college environment: taking on new challenges, developing strategies in response to new opportunities and brainstorming new ideas with classmates.

Time spent in these areas might really benefit innovation, but not necessarily GPA.


IQ Matters for Invention

Source: NBER, Dec 2017

… IQ has both a direct effect on the probability of inventing which is almost five times as large as that of having a high-income father, and an indirect effect through education …


… an R-squared decomposition shows that IQ matters more than all family background variables combined; moreover, IQ has both a direct and an indirect impact through education on the probability of inventing, and finally the impact of IQ is larger and more convex for inventors than for medical doctors or lawyers. Third, to address the potential endogeneity of IQ, we focused on potential inventors with brothers close in age. This allowed us to control for family-specific time-invariant unobservables. We showed that the effect of visuospatial IQ on the probability of inventing is maintained when adding these controls.
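To make the “brothers close in age” design concrete, here is a minimal sketch of a family-fixed-effects linear probability model of inventing on IQ; the data, variable names, and specification are illustrative assumptions, not the NBER authors’ actual model.

```python
# Minimal sketch of a family-fixed-effects linear probability model.
# Hypothetical data and variable names; not the NBER authors' specification.
import pandas as pd
import statsmodels.formula.api as smf

# One row per individual; 'family_id' groups brothers close in age.
df = pd.DataFrame({
    "inventor":  [0, 1, 0, 0, 1, 0],            # 1 = holds at least one patent
    "iq":        [98, 122, 105, 110, 130, 101],
    "education": [12, 16, 14, 15, 18, 12],       # years of schooling
    "family_id": ["a", "a", "b", "b", "c", "c"],
})

# C(family_id) adds a dummy per family, absorbing family-specific,
# time-invariant unobservables; the IQ coefficient is then identified
# only from within-family (between-brother) variation.
model = smf.ols("inventor ~ iq + education + C(family_id)", data=df).fit()
print(model.params)
```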

James Simons – Flatiron Institute

Source: The New Yorker, Dec 2017

The Flatiron Institute, which is in an eleven-story fin-de-siècle building on the corner of Twenty-first Street and Fifth Avenue, is devoted exclusively to computational science—the development and application of algorithms to analyze enormous caches of scientific data. 

The institute’s aim is to help provide top researchers across the scientific spectrum with bespoke algorithms that can detect even the faintest tune in the digital cacophony. … At the Flatiron, a nonprofit enterprise, the goal is to apply Renaissance’s analytical strategies to projects dedicated to expanding knowledge and helping humanity. 

The Flatiron doesn’t conduct any new experiments. Most of its fellows collaborate with universities that generate fresh data from “wet” labs—the ones with autoclaves and genetically engineered mice—and other facilities. The institute’s algorithms and computer models are meant to help its fellows uncover information hidden in data sets that have already been collected: inferring the location of new planets from the distorted space-time that surrounds them; identifying links to mutations among apparently functionless parts of chromosomes. 

Simons has placed a big bet on his hunch that basic science will yield to the same approach that made him rich. He has hired ninety-one fellows in the past two years, and expects to employ more than two hundred, making the Flatiron almost as big as the Institute for Advanced Study, in Princeton, New Jersey. He is not worried about the cost. “I originally thought seventy-five million a year, but now I’m thinking it’s probably going to be about eighty,”

“We’re spending four hundred and fifty million a year,” Simons said. “Gradually, the Simons Foundation International will take over much of the spending.”

He encouraged interaction and debate among the researchers. “Everything was collaboration at Renaissance, or a great deal of it,” he said. “It was a very open atmosphere.” Former colleagues agree that Simons was an exceptional manager. He understood what scientists enjoyed, and often arranged quirky bonding exercises: at one point, Renaissance employees competed to see who could ride a bicycle along a particular path at the slowest speed without falling over.

“Taste in science is very important,” Simons told me. “To distinguish what’s a good problem and what’s a problem that no one’s going to care about the answer to anyway—that’s taste. And I think I have good taste.”

A new research center could “prospect for interesting data sets where people intuit that there’s more structure than can be gotten out now, but that aren’t so complicated that it’s hopeless.”

Sharing had been an important part of Renaissance’s culture. “Everyone knew what everyone else was doing,” Simons said. “They could pitch in and say, ‘Try that!’ ” He wants information to flow among groups at the Flatiron Institute, too, so there are plenty of chalkboards in the hallways, and communal areas—coffee nooks, tuffets arranged in rows—where fellows can “sit around and schmooze.” He observed, “An algorithm that’s good for spike sorting—some version of it might conceivably be good for star sorting, or for looking at other things in another field.” One day in June, I passed a chalkboard that was covered with an equation written by David Spergel, the head of the astronomy division. It demonstrated that the way a supernova explosion drives galactic winds also captures the behavior of the movement of waves in oceans and, by implication, the movement of fluids in cells.

Another researcher pointed out that, as powerful as computational science has become, it still relies on the kind of experimental science that the institute does not fund. In an e-mail, the researcher noted, “The predictions from the computation can only ever be as good as the data that has been generated. (I think!)”

University of Cambridge 2017 Study on CryptoCurrencies

Source: U. of Cambridge website, 2017

Bitcoin began operating in January 2009 and is the first decentralised cryptocurrency, with the second cryptocurrency, Namecoin, not emerging until more than two years later in April 2011. Today, there are hundreds of cryptocurrencies with market value that are being traded, and thousands of cryptocurrencies that have existed at some point.

The common element of these different cryptocurrency systems is the public ledger (‘blockchain’) that is shared between network participants and the use of native tokens as a way to incentivise participants for running the network in the absence of a central authority. However, there are significant differences between some cryptocurrencies with regards to the level of innovation displayed (Figure 1).

The majority of cryptocurrencies are largely clones of bitcoin or other cryptocurrencies and simply feature different parameter values (e.g., different block time, currency supply, and issuance scheme). These cryptocurrencies show little to no innovation and are often referred to as ‘altcoins’. Examples include Dogecoin and Ethereum Classic.
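To illustrate how thin the difference often is, a clone of this kind can frequently be summarised as a handful of parameter choices. The comparison below is an illustrative sketch with approximate, Dogecoin-style values; it is not taken from the Cambridge report.

```python
# Approximate, illustrative comparison of consensus-level parameters between
# Bitcoin and a typical "parameter-tweak" altcoin clone (Dogecoin-style values).
chain_parameters = {
    "bitcoin": {
        "target_block_time_seconds": 600,   # ~10-minute blocks
        "max_supply": 21_000_000,           # hard cap on coins issued
        "issuance": "block subsidy halves roughly every four years",
    },
    "typical_altcoin_clone": {
        "target_block_time_seconds": 60,    # ~1-minute blocks
        "max_supply": None,                 # uncapped, inflationary supply
        "issuance": "fixed block subsidy with no hard cap",
    },
}

# Everything else -- the proof-of-work blockchain and the native-token
# incentive scheme -- is inherited essentially unchanged from Bitcoin.
```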

In contrast, a number of cryptocurrencies have emerged that, while borrowing some concepts from Bitcoin, provide novel and innovative features that offer substantive differences.

These can include the introduction of new consensus mechanisms (e.g., proof-of-stake) as well as decentralised computing platforms with ‘smart contract’ capabilities that provide substantially different functionality and enable nonmonetary use cases.

These ‘cryptocurrency and blockchain innovations’ can be grouped into two categories: new (public) blockchain systems that feature their own blockchain (e.g., Ethereum, Peercoin, Zcash), and dApps/Other that exist on additional layers built on top of existing blockchain systems (e.g., Counterparty, Augur).

Maxwell’s Equations: From 20 to 4

Source: ETHW website, date indeterminate

Maxwell’s Equations refer to a set of four relations that describe the properties and interrelations of electric and magnetic fields. The equations are shown in modern notation in Figure 2. The electric force fields are described by the quantities E (the electric field) and D = εE (the electric displacement), the latter including how the electrical charges in a material become polarized in an electric field. The magnetic force fields are described by H (the magnetic field) and B = µH (the magnetic flux density), the latter accounting for the magnetization of a material.

The equations can be considered in two pairs. The first pair consists of Equation 1 and Equation 2. Equation 1 describes the electric force field surrounding a distribution of electric charge ρ. It shows that the electric field lines diverge from areas of positive charge and converge onto areas of negative charge (Figure 3). Equation 2 shows that magnetic field lines curl to form closed loops (Figure 4), with the implication that every north pole of a magnet is accompanied by a south pole. The second pair, Equation 3 and Equation 4, describes how electric and magnetic fields are related. Equation 3 describes how a time-varying magnetic field will cause an electric field to curl around it. Equation 4 describes how a magnetic field curls around a time-varying electric field or an electric current flowing in a conductor.

Original 20 equations

Modern-Day 4 Equations
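Since Figure 2 is not reproduced here, the standard modern (macroscopic) statement of the four equations, numbered to match the description above, is given below. This is the textbook form rather than a reproduction of the ETHW figure, with J denoting the free current density:

```latex
\begin{aligned}
\nabla \cdot  \mathbf{D} &= \rho
  && \text{(Equation 1: field lines diverge from electric charge)} \\
\nabla \cdot  \mathbf{B} &= 0
  && \text{(Equation 2: magnetic field lines form closed loops)} \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
  && \text{(Equation 3: Faraday's law of induction)} \\
\nabla \times \mathbf{H} &= \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t}
  && \text{(Equation 4: Amp\`ere's law with the displacement current)}
\end{aligned}
```

Here D = εE and B = µH, as defined in the excerpt above.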

Heaviside championed the Faraday-Maxwell approach to electromagnetism and simplified Maxwell’s original set of 20 equations to the four used today. Importantly, Heaviside rewrote Maxwell’s Equations in a form that involved only electric and magnetic fields. Maxwell’s original equations had included both fields and potentials.

In an analogy to gravity, the field corresponds to the gravitational force pulling an object onto the Earth, while the potential corresponds to the shape of the landscape on which it stands. By configuring the equations only in terms of fields, Heaviside simplified them to his so-called Duplex notation, with the symmetry evident in the equations of Figure 2. He also developed the mathematical discipline of vector calculus with which to apply the equations. Heaviside analysed the interaction of electromagnetic waves with conductors and derived the telegrapher’s equations of Kirchhoff from Maxwell’s theory to describe the propagation of electrical signals along a transmission line.
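For reference, the telegrapher’s equations mentioned above take the standard form shown below (not quoted from the ETHW page); V(x, t) and I(x, t) are the voltage and current along the line, and R, L, G, C are the per-unit-length resistance, inductance, conductance, and capacitance:

```latex
\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t},
\qquad
\frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}
```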

Related Resources:

1.  “Maxwell’s Original Equations”, 2011

2.  Maxwell’s Scientific Papers, 8 Equations, date indeterminate

3. IEEE, Dec 2014

Maxwell’s own description of his theory was stunningly complicated. College students may greet the four Maxwell’s equations with terror, but Maxwell’s formulation was far messier. To write the equations economically, we need mathematics that wasn’t fully mature when Maxwell was conducting his work. Specifically, we need vector calculus, a way of compactly codifying the differential equations of vectors in three dimensions.

Maxwell’s theory today can be summed up by four equations. But his formulation took the form of 20 simultaneous equations, with 20 variables. The dimensional components of his equations (the x, y, and z directions) had to be spelled out separately. And he employed some counterintuitive variables. 

The net result of all of this complexity is that when Maxwell’s theory made its debut, almost nobody was paying attention.

It was Heaviside, working largely in seclusion, who put Maxwell’s equations in their present form. 

The key was eliminating Maxwell’s strange magnetic vector potential. “I never made any progress until I threw all the potentials overboard,” Heaviside later said. The new formulation instead placed the electric and magnetic fields front and center.

One of the consequences of the work was that it exposed the beautiful symmetry in Maxwell’s equations. One of the four equations describes how a changing magnetic field creates an electric field (Faraday’s discovery), and another describes how a changing electric field creates a magnetic field (the famous displacement current, added by Maxwell).

4. Mathematical Representations in Science

NP-Complete = Circuit Problem

Source: MIT website, date indeterminate

Any NP-complete problem can be represented as a logic circuit — a combination of the elementary computing elements that underlie all real-world computing. Solving the problem is equivalent to finding a set of inputs to the circuit that will yield a given output.
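As a toy illustration of that framing (the circuit and function names below are hypothetical, not from the MIT article), the “find inputs that yield a given output” task is circuit satisfiability, which a generic algorithm can only solve by trying assignments:

```python
# Toy circuit-satisfiability search: the generic approach tries every
# possible input assignment, which takes exponential time in the number
# of inputs. Circuit and names are illustrative only.
from itertools import product

def tiny_circuit(x1, x2, x3):
    """A small logic circuit built from AND/OR/NOT gates."""
    return (x1 or x2) and ((not x2) or x3) and (x1 or (not x3))

def find_satisfying_input(circuit, n_inputs, target=True):
    """Return the first input assignment that makes the circuit
    produce `target`, or None if no assignment does."""
    for assignment in product([False, True], repeat=n_inputs):
        if circuit(*assignment) == target:
            return assignment
    return None

print(find_satisfying_input(tiny_circuit, 3))  # -> (True, False, False)
```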

Suppose that, for a particular class of circuits, a clever programmer can devise a method for finding inputs that’s slightly more efficient than solving a generic NP-complete problem. Then, Williams showed, it’s possible to construct a mathematical function that those circuits cannot implement efficiently.

It’s a bit of computational jiu-jitsu: By finding a better algorithm, the computer scientist proves that a circuit isn’t powerful enough to perform another computational task. And that establishes a lower bound on that circuit’s performance.

First, Williams proved the theoretical connection between algorithms and lower bounds, which was dramatic enough, but then he proved that it applied to a very particular class of circuits.

++++++++++++++++++++++++++++++++++++++++++++++++++++++

Although he had been writing computer programs since age 7 — often without the benefit of a computer to run them on — Williams had never taken a computer science class. Now that he was finally enrolled in one, he found it boring, and he was not shy about saying so in class.

Eventually, his frustrated teacher pulled a heavy white book off of a shelf, dumped it dramatically on Williams’s desk, and told him to look up the problem described in the final chapter. “If you can solve that,” he said, “then you can complain.”

The book was “Introduction to Algorithms,” co-written by MIT computer scientists Charles Leiserson and Ron Rivest and one of their students, and the problem at the back was the question of P vs. NP, which is frequently described as the most important outstanding problem in theoretical computer science.

Crypto Enters its Frenzy Phase

Source: ZeroHedge, Dec 2017

Bitcoin and friends ….