Christopher Alexander’s A Pattern Language: Analysing, Mapping and Classifying the Critical Response | Dawes and Ostwald | 2017

While many outside of the field of architecture like the #ChristopherAlexander #PatternLanguage approach, it is not so well accepted by his architectural peers. A summary of the criticisms by #MichaelJDawes and #MichaelJOstwald @UNSWBuiltEnv is helpful in appreciating when the use of a pattern language might or might not be appropriate.

A distinction is made between Alexander’s first theory of architecture (1964), a second theory (1975-1979) for which he is most widely known, and a third theory (2005-2007).

Christopher Alexander’s ‘first theory’ of architectural beauty was presented in his Harvard doctoral thesis and later published as Notes on the Synthesis of Form (Alexander 1964). The inspiration for this work is Alexander’s belief that the buildings of traditional societies are inherently more beautiful than contemporary architecture. [….]

When applied in practice, Alexander discovered that this process was too demanding for all but the largest design projects.

This led to a second theory, coauthored with collaborators in the Center for Environmental Structure.

Alexander’s second theory, itself a collaborative process, was developed across three canonical books; The Oregon Experiment (Alexander et al. 1975), A Pattern Language (Alexander et al. 1977) and The Timeless Way of Building (Alexander 1979). Collectively these three works constitute one of the 1960s and 1970s most sustained criticisms of modernism. [….]

… intuitive and unconscious processes were vital components of traditional and vernacular architecture … [and] the importance of cognitive cohesion, vitality and piecemeal growth as part of a vibrant built environment … All of these concepts were central to Alexander’s second theory of architecture, which again focused on the inherent beauty of traditional urban spaces and buildings.

The third theory has been less popular, but is well known to disciples following Alexander’s work.

Ultimately however, Alexander rejected his second theory of architectural beauty as he felt it had too little generative power and too little focus on geometry. Three decades later he proposed a ‘third theory’ of beauty, which replaced patterns with the generic concept of ‘centres’ and their transformations, in addition to removing much of the neatly packaged social and architectural content that makes his second theory so compelling (Alexander 2002b, c, 2004, 2005; Adams and Tiesdall 2007).

The second theory, particularly A Pattern Language, has had the most influence outside of the built environment. It is on this work that the analysis of the critical response focuses.

Following the publication of his second theory, Alexander bemoaned a lack of engagement from architectural and design professionals which might be partially explained by criticisms of the development and documentation of this theory (Kohn 2002). The barriers preventing architects from engaging with Alexander’s theory can be broadly categorised into three groups (Table 2).

Fig. 3
Criticism connections and groupings of Alexander’s second theory of architecture: Implementation and outcomes. Numbers correspond to criticism numbers in text and tables, dotted lines indicate groups and sub-groups of criticisms, arrows point from antecedent criticisms to secondary criticisms or groups of criticisms

The first group [5, 6, 7, 8, 9, 10, 11, 12] arises from Alexander’s idiosyncratic understanding of ‘science’ [4] and subsequent issues, including an absence of explicit definitions, which makes practical engagement with the theory difficult.

The second group [9, 13, 14] focuses on Alexander’s ambivalent use of the term ‘empirical’ to describe his theory, the progenitors of which include both his definition of ‘science’ [4] and his belief in one ‘right’ way of building [3] (Fig. 2).

The final group [15, 16, 17, 18, 19] contains criticisms primarily related to the development of Alexander’s theory, including issues such as faulty reasoning that arise primarily from his argument that there is only one right way of building [3]. The problems identified in the second and third groups contribute to further criticisms of the implementation and outcomes of Alexander’s theory.

The pursuit of beauty is admirable. The science behind it is difficult.

Reference

Dawes, Michael J., and Michael J. Ostwald. 2017. “Christopher Alexander’s A Pattern Language: Analysing, Mapping and Classifying the Critical Response.” City, Territory and Architecture 4 (1): 1–14. https://doi.org/10.1186/s40410-017-0073-1.

#nature-of-order, #pattern-language

Field (system definitions, 2004, plus social)

Systems thinking should include not only thinking about the system, but also its environment. Using the term “field” for the system of interest plus its influences still leaves a lot of the world uncovered. From the multiple definitions in the International Encyclopedia of Systems and Cybernetics, there is a variety of ways of understanding “field”. One that is general and useful emphasizes knowing.

— begin excerpt —

1272
FIELD-FUNCTION 2)
“A knowledge-set which expresses the relationship between the exogenous factors present in the system’s environment and the properties of the system’s parts” (J.W. SUTHERLAND, 1974, p.37).

This concept, thus somewhat fuzzily defined, expresses that it is not possible to understand and explain the system without reference to its environment.

It seems however very difficult to establish precisely such a knowledge-set. Until now we do not have any very reliable method for tracking all the elements and interconnexions that should be integrated in such a field-function. This is, for example, a patent weakness of FORRESTER’S Systems Dynamics, underlined by the controversial character of the famed CLUB of ROME studies.

— end excerpt —

That criticism about tracking “all the elements and interconnexions” is fair, but it might be handled as a concern separate from the question of which parties are privileged to define the boundary of a system of interest (and therefore also the boundary of a field).

The place where I first encountered the idea of field is in Emery and Trist. This sits in a larger context that has been explicated by Babüroǵlu as the Emery-Trist Systems Paradigm (ETSP).

— begin excerpt —

The salience of turbulent environments and the concern about adaptation in turbulent environments lead to concentration on the interdependencies in the environment, L22. The third track was established with the publication of the jointly coauthored book entitled Towards a Social Ecology (Emery and Trist, 1973). In this book, social ecology directs the focus to the interdependencies between human institutions and human culture, both as figure and as ground (Vickers, 1973). The ecological emphasis–that of raising the unit of analysis from the single organization to the population of interdependent organizations and institutions–was further developed with the introduction of the “extended social field” (Emery, 1977) and the “organizational ecology” (Trist, 1977) concepts. Therefore, the third track marks liberation from the single social system referential design.

Societal problems, such as environmental degradation and economic revival, could no longer be dealt with solely by individual organizations. Instead, inter-organizational domains (Trist, 1983), composed of members all concerned with the same set of problems, had to be activated, formed, and managed. This implied that there were some missing institutions which lie somewhere between the “micro and macro social scales” (Wright and Morley, 1989). Innovating organizations interconnecting organizational, industrial, societal, community, and personal development constituted what Trist (1978) called “the new directions of hope.” The design principles for interorganizational domains reemphasized the ETSP essentials of participative democracy and participation, power sharing and complementarity, acknowledgment of multiple interest groups, and a negotiated order between them. The search conference methodology was especially suited for domain creation and planning.

The York University group coined a concept of action learning descriptive of what Emery called the new educational paradigm. This contrasts with action research, which expresses the engagement and intervention mode in the first track of the ETSP. Action learning “focused on the common transactional and contextual environments associated with the set of organizations drawn together around the domain issues” (Morley, 1989), as opposed to focusing on the internal environments of single organizations. Action learning facilitates a process whereby the participants go through a “subjective and a collective transformation of consciousness” (Morley, 1989) regarding the existing boundaries of the systems in question. Furthermore, action learning aims at making it possible for learning to occur at the individual, group, organization, and inter-organizational and societal (public) levels. [Babüroǵlu, 1992, pp. 276-277]

— end excerpt —

Another influence, well after the Tavistock years, that I was surprised to see in the International Encyclopedia of Systems and Cybernetics, is Pierre Bourdieu, whom I have been reading.

— begin excerpt —

1268
FIELD (Social) 4)
“An autonomous microcosm within the social macrocosm” (P. BOURDIEU, 2000)

“Autonomous” is a perfect characterization of such a microcosm, as it means that it possesses (i.e. establishes and maintains) its own definitory norms and rules.

Social fields can be observed in all human institutions and organizations– as for ex. in religious groups of any kind, in business, in politics, in the arts and even in scientific disciplines within academic structures.

It is not proper to any culture, but it offers a wide arc of specific forms in accordance with environmental and historical conditions.

Recent research raises the possibility that some precursor forms of social fields may exist in some insect societies, such as beehives and termite mounds.

Social fields could conceivably be related to competition for the occupation of the internal social space, and also to some kinds of age classes or cohorts.

— end excerpt —

Finally, a more general sense of field, outside of social systems, is provided by Ashby.

— begin excerpt —

1267
FIELD of system 2)
“The phase-space containing all the lines of behavior found by releasing the system from all possible initial states in a particular set of surrounding conditions” (W.R. ASHBY, 1960, p.28).

ASHBY states: “The concept of ‘field’… defines the characteristic behavior of the system, replacing the vague concept of what a system ‘does’ or how it ‘behaves’ (often describable only in words) by the precise construct of a ‘field’” (Ibid).

It is however debatable whether it is always possible to obtain a full knowledge of “all possible states”, let alone of the full phase-space.

Nevertheless, ASHBY offered very interesting insights into the concept of field in his “Introduction to Cybernetics” (chap. 9, “Incessant transmission”, where he describes the set of all the possible transitions of a system by matrixes) (1956).

— end excerpt —
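To make Ashby’s sense of “field” concrete, here is a minimal sketch in R (my own illustration, using a made-up toy transformation, not from Ashby or the encyclopedia): a small deterministic system is released from every possible initial state, and the collection of the resulting lines of behaviour is the field.

# A toy transformation on four states; each state maps deterministically to the next.
transition <- c(a = "b", b = "c", c = "b", d = "a")

# Follow one line of behaviour from a given initial state.
line_of_behaviour <- function(start, steps = 5) {
  path <- start
  for (i in seq_len(steps)) {
    path <- c(path, transition[[tail(path, 1)]])
  }
  paste(path, collapse = " -> ")
}

# The "field" is the whole set of trajectories, one per possible initial state.
field <- sapply(names(transition), line_of_behaviour)
print(field)

The point of the sketch is Ashby’s replacement of the vague notion of what a system “does” with a precise construct: not one trajectory, but the set of all trajectories under a given set of surrounding conditions.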

In the context of systems definitions, choosing one or more of the above definitions may be appropriate depending on the domain of interest: social systems specifically, or systems more generally.


In the International Encyclopedia of Systems and Cybernetics, the numbers beside the entry mean …

  • The following special markers have been used, in order to enhance the usefulness of the encyclopedia:
  • 1) meaning “systemic on a wide range”, or “general information”
  • 2) meaning “general abstract or mathematical model”, or “methodology”
  • 3) meaning “epistemological or ontological aspects”, or “semantics”
  • 4) meaning “practical in human sciences”
  • 5) meaning “more specific or disciplinarian”

In this paper-first encyclopedia, the bolded text is a link to other entries.

Reference

Babüroǵlu, Oǵuz N. 1992. “Tracking the Development of the Emery-Trist Systems Paradigm (ETSP).” Systems Practice 5 (3): 263–290. https://doi.org/10.1007/BF01059844.

François, Charles, ed. 2004. International Encyclopedia of Systems and Cybernetics | 2nd ed. De Gruyter Saur. https://doi.org/10.1515/9783110968019.

International Encyclopedia of Systems and Cybernetics

#field, #field-theory

Emergence (system definitions, 2004)

The term “emergence” is often used to suggest an outcome that is unexpected. Reading a series of three entries from the International Encyclopedia of Systems and Cybernetics (out of order) helps.

Firstly, let’s try emergence … as compared to what?

— begin excerpt —

1060
EMERGENCE and REDUCTION 2) 3)
Any good understanding of a complex system requires a well integrated understanding of the relationships between emergence and reduction.

Properties of a whole cannot make full sense if not sighted as a global network of interactions between parts, which in turn must be duly considered as such.

A very simple example is the study of water (H2O).

Its properties are widely different from those of hydrogen and oxygen. However, the characteristics proper to these molecules (for example their electron shells) are basic for the understanding of common (or heavy) water. This becomes still clearer when we consider for example the hydrogen atom within the HCl molecule. It is commonly said that the whole is more than its parts.

It is however in a sense also less: as atoms enter in combination, they actualize potentially possible relationships, but also preclude others. In short emergence and reductionism offer complementary and necessary views and it is a gross mistake to oppose them in an exclusive way.

In this 1996 paper, K. BAILEY offers important insights about the ways we should use what he calls the upward ladder (bottom up) and the downward ladder (top down).

— end excerpt —

Many times, the interest isn’t really the emergence itself, but emergent properties.

— begin excerpt —

1063
EMERGENT PROPERTIES 2)

1. “Properties of a structural level in a hierarchy that cannot be predicted from the properties of the components of the antecedent level” (R.F. FOX, 1988, p. 171).

2. “a) Properties which emerge as a coarser-grained level of resolution is used by the observer.

b) Properties which are unexpected by the observer because of his incomplete data set, with regard to the phenomenon at hand.

c) Properties which are, in and of themselves, not derivable a priori from the behavior of the parts” (T.F.H. ALLEN & T.B. STARR, 1982, p.267).

A good example of emergent property can be found in a system’s autonomy. N. PEGUIRON describes the following very simple situation: “Let us consider an elemental electrical circuit made up from a relay and two switches: the first normally open serves to start the current within the coil of the relay, but it can be seen that this action is conserved when one stops to press the switch; the second one, normally shut, has the opposite effect, but the same property. This very simple circuit presents the property to memorize the last performed commands; however, this memorization property does not belong to any of the elements. As long as the coils, the contacts and the switches are separately studied, it is not possible to understand, nor to predict the global property of memorization. This property is said to be emergent because it is found neither in the components of the system, nor in their assembled state” (1989, p.9).

It appears only when the system is connected functionally with its environment, the grid.

Numerous other examples of emergent properties can be given, as for example:

  • life, in relation to macromolecules
  • consciousness as a result of numerous interconnexions between neurons
  • a working car, as a meaningful and functional assembly of parts.

  • water as a liquid, while its parts, hydrogen and oxygen, are both gases in their elemental state.

— end excerpt —
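PEGUIRON’s relay circuit can be restated as a tiny state-transition sketch in R (my own illustration, not from the encyclopedia): the memory of the last command belongs to the assembled circuit, not to the coil or to either switch taken separately.

# One step of the latching circuit: a normally-open start switch energizes the
# relay, a normally-closed stop switch breaks the holding current, and otherwise
# the relay holds (memorizes) whatever state it was last commanded into.
relay_step <- function(state, start_pressed = FALSE, stop_pressed = FALSE) {
  if (stop_pressed) return(FALSE)
  if (start_pressed) return(TRUE)
  state
}

s <- FALSE
s <- relay_step(s, start_pressed = TRUE)   # press start: relay closes
s <- relay_step(s)                         # release start: the TRUE state is retained
s <- relay_step(s, stop_pressed = TRUE)    # press stop: relay opens
print(s)                                   # FALSE

Studied in isolation, neither switch nor the coil has a memory property; the retention of the last command appears only in their functional assembly.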

Finally, here’s the term appreciated as a process.

— begin excerpt —

1059
EMERGENCE 1) – 2)
1. “The spontaneous transformation of a set of components from a less coherent state, where the space-time correlation between them is confined to mean free path and mean relaxation collision times, to a more coherent state exhibiting novel, global, dynamical space-time behavior” (R. SWENSON, 1989, p.188).

2. “Any process whereby the variety and/or constraint of a system changes” (F. HEYLIGHEN, pers. comm.).

Emergence implies discontinuity and innovation in the construction of complexity (W. KARGL, 1991, p.579).

G. PASK, for instance, described functional emergence as the recognition “by an external observer” that “a device or agent has acquired a new distinction, concept, structure” (P. CARIANI, 1993, p.28).

SWENSON comments that this new coherent state is: “many orders of magnitude greater than mean free path and relaxation times; inaccessible to, not locatable in, and not reducible to the individual or summative behavior of the separated atomisms” (i.e. elements or components); “the spontaneous creation of a new set of macroscopic constraints that reduce accessible microstates from some initial set Mn to some much smaller subset Ms, to yield a new irreducible level of dynamical space-time behavior. By the transformation Mn to Ms, emergence is always a progressive, asymmetrical time-dependent transformation of matter away from equilibrium” (p.189).

Emergent systems are thus always more complex and less stable than their components. They derive from the appearance of an emergent attractor. Ch. LAVILLE’s concepts about vortexes (1950), and more recently D. McNEIL’s toroidal model of the system (1993a and b), suggest that such attractors are produced by opposing energetic fields.

HEYLIGHEN states: “The emergence of systems of (partially) conserved distinctions cannot be deduced from the properties of lower-level ‘microscopic’ distinctions, but may be understood as a process of self-organization, governed by variation and selective retention” (1989, p.382).

J.P. CHANGEUX in turn states: “The highest levels (of organization) emerge progressively throughout biological evolution. They superpose and embed the inferior levels, themselves selected during former evolutive steps” (1992, p.707).

HEYLIGHEN also writes: “… such a process will necessarily change the identity of the system itself. It might therefore also be called a system transition. This is a qualitative change, where a new organization or system appears, with properties (potential appearances) that did not exist in the old system. The more usual (“quantitative”) dynamical evolution of a system, on the other hand, is merely a transition within the constrained variety of possible appearances, where neither constraint nor variety undergo any change” (Ibid).

This concept is quite different from SWENSON’s, but closer to PRIGOGINE’s. Here dynamical evolution (or better, adaptation or accommodation) remains in accordance with the organizational closure of the system, which is replaced, or at least transformed, in case of emergence.

Examples may better emphasize differences.

A running athlete accommodates him/herself to the effort by breathing rapidly and strongly, but only for a short time.

One who must go to live for years in some high altitude place, adapts permanently his/her respiratory capacity to this change.

The first animals who left the oceans underwent a radiative evolution of emergent types, changing branchial for aerial pulmonary breathing.

— end excerpt —

The numbers beside the entry mean …

  • The following special markers have been used, in order to enhance the usefulness of the encyclopedia:
  • 1) meaning “systemic on a wide range”, or “general information”
  • 2) meaning “general abstract or mathematical model”, or “methodology”
  • 3) meaning “epistemological or ontological aspects”, or “semantics”
  • 4) meaning “practical in human sciences”
  • 5) meaning “more specific or disciplinarian”

In this paper-first encyclopedia, the bolded text is a link to other entries.

Reference

François, Charles, ed. 2004. International Encyclopedia of Systems and Cybernetics | 2nd ed. De Gruyter Saur. https://doi.org/10.1515/9783110968019.

International Encyclopedia of Systems and Cybernetics

Behavior (a systems definition, 2004)

We use the term “behavior” widely. Do we really know what we mean by that? Here’s an entry from the International Encyclopedia of Systems and Cybernetics.

— begin excerpt —

0247
BEHAVIOR 1) – 4)

1. “The system of interconnected and expedient actions carried out by an organism” (UNESCO-UNEP, 1983, p.6)

2. A repetitive sequence or pattern of actions or operations, and resulting states, characteristic of a specific system.

The second definition is more general than the first, since it can apply to non-living systems, as well as to societies of organisms.

The concept of behavior, when referring to a complex system, may be associated with an interconnected network of actions.

G. PASK gives the following examples: “The behavior of a steam engine is a recurrent cycle of steam injection and piston movements that remains invariant. The behavior of a cat is made of performances like eating and sleeping and, once again, it is an invariant form selected from the multitude of things a cat might possibly do” (1961, p.18).

Models of numerous different behaviors can be constructed, but one model may be used to modelize analogous behavior of different systems, as for example cyclical oscillations in homeostatic systems, or dissipative structuration in systems far from equilibrium. Behavior is thus a systemic-cybernetic concept of very ample significance and does not exclusively and necessarily have psychological overtones.

However, as stated by R. ESPEJO: “The greater the number of distinct behaviors that are recognized in a situation, the more complex it appears to be” (1988, p.140).

From a somewhat different viewpoint G. KLIR defines behavior as: “A particular set of time-invariant relations between certain quantities”. He then proceeds to define three basic kinds of behavior:

“1. Permanent (real) behavior – the set of all absolute relations;

“2. Relatively permanent (known) behavior – the set of all relative relations of a particular activity;

“3. Temporary behavior – the set of local relations within a distinct section of a particular activity” (1965, p.30).

He adds: “It should be remembered that permanent behavior is known only in some cases, e.g., when it is directly given in some engineering system” (Ibid).

Still from another perspective, A. ROSENBLUETH, N. WIENER and J. BIGELOW (1943, p.22) proposed a classification of orders of behavior.

In turn, R.L. ACKOFF establishes the following behavioral classification of systems (1971):

Type of system | Behavior of system | Outcome of behavior
State maintaining | Variable but determined (reactive) | Fixed
Goal-seeking | Variable and chosen (responsive) | Fixed
Multi-goal-seeking and purposive | Variable and chosen | Variable but determined
Purposeful | Variable and chosen | Variable and chosen
Ackoff, Russell L. 1971. “Towards a System of Systems Concepts.” Management Science 17 (11): 661–671.

— end excerpt —

The numbers beside the entry mean …

  • The following special markers have been used, in order to enhance the usefulness of the encyclopedia:
  • 1) meaning “systemic on a wide range”, or “general information”
  • 2) meaning “general abstract or mathematical model”, or “methodology”
  • 3) meaning “epistemological or ontological aspects”, or “semantics”
  • 4) meaning “practical in human sciences”
  • 5) meaning “more specific or disciplinarian”

In this paper-first encyclopedia, the bolded text is a link to other entries.

Reference

François, Charles, ed. 2004. International Encyclopedia of Systems and Cybernetics | 2nd ed. De Gruyter Saur. https://doi.org/10.1515/9783110968019.

International Encyclopedia of Systems and Cybernetics, 2nd ed., 2004

Plans as resources for action (Suchman, 1988)

Two ways of thinking about practice position plans either as (i) “determinants of action” or as (ii) “resources for action”. The latter has become a convention, particularly through research into Human Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW).

While the more durable explanation appears in the Suchman (1987) book (specifically section “8.2 Plans as resources for action”, pp. 185-189), a source more readily at hand may be found in a Suchman (1988) article.

4. Plans as resources for action

Taken as the determinants of what people do, plans provide both a device by which practice can be represented in cognitive science and a solution to the problem of purposeful action. If we apply an ethnomethodological inversion to the cognitive science view, however, plans take on a different status. Rather than describing the mechanism by which action is generated and a solution to the analysts’ problem, plans are common sense constructs produced and used by actors engaged in everyday practice. As such, they are not the solution to the problem of practice but part of the subject matter. While plans provide useful ways of talking and reasoning about action, their relation to the action’s production is an open question. [….] [p. 314]

The planning model takes off from our common sense preoccupation with the anticipation of action and the review of its outcomes and attempts to systematize that reasoning as a model for situated practice itself. These examples, however, suggest an alternative view of the relationship between plans, as representations of conditions and actions, and situated practice. Situated practice comprises moment-by-moment interactions with our environment more and less informed by reference to representations of conditions and of actions, and more and less available to representation themselves. The function of planning is not to provide a specification or control structure for such local interactions, but rather to orient us in a way that will allow us, through the local interactions, to respond to some contingencies of our environment and to avoid others. As Agre and Chapman put it “[m]ost of the work of using a plan is in determining its relevance to the successive concrete situations that occur during the activity it helps to organize” (1987a). Plans specify actions just to the level that specification is useful; they are vague with respect to the details of action precisely at the level at which it makes sense to forego specification and rely on the availability of a contingent and necessarily ad hoc response. Plans are not the determinants of action, in sum, but rather are resources to be constructed and consulted by actors before and after the fact. [pp. 314-315]

Suchman (1987)
  • Agre, P., and Chapman, D. (1987a). What are plans for? Paper presented for the panel on Representing Plans and Goals, DARPA Planning Workshop, Santa Cruz, CA., MIT Artificial Intelligence Laboratory, Cambridge, MA.
  • Agre, P., and Chapman, D. (1987b). Pengi: An implementation of a theory of activity. Proceedings of the American Association for Artificial Intelligence, Seattle, WA.

Sources

The best time to plant a tree was twenty years ago

Does “the best time to plant a tree was twenty years ago and the second best time is now” date back further than 1988?

It is time to look long and hard at the value of the urban forest and create the broad-based efforts — in research, funding and citizen participation — needed to improve it. The lesson is, the best time to plant a tree was twenty years ago and the second best time is now.

Moll (1988), p. 41

In 1998, Gary Moll was president of the American Forestry Association. He was recognized in “Gary Moll Wants People and Nature to Work Together” | Fall 2009 | ESRI ArcNews Online. In 2013, he was coauthor of “Shading Our Cities” | Island Press.

The rising prices of Christmas trees in 2019 surfaced this question.

Consumers on the hunt for a Christmas tree have little to cheer about this year, as prices are through the roof due to a shortage of trees that can be traced back to the 2008 financial crisis.

The Great Recession put thousands of American Christmas tree farmers out of business, resulting in far fewer seedlings being planted. As trees have a maturity cycle of 10 years, the lack of supply is just now beginning to bite, pushing up U.S. demand for Canadian Christmas trees and causing higher prices for consumers across the continent. [….]

Paul Quinn suspects the supply shortage will remain for at least a couple years.

“As the economics get better for tree growers you’ll see them planting more trees. Unfortunately, you had to have that foresight 10 years ago,” he said.

Reynolds (2019)

In the pursuit of etymology and a better quote citation, I would welcome seeing earlier uses of the phrase!

References:

#tree, #years-ago

2019/11/05 13:15 “Barriers to Data Science Adoption: Why Existing Frameworks Aren’t Working”, Workshop at CASCON-Evoke, Markham, Ontario

Workshop led by @RohanAlexander and @prof_lyons at #CASCONxEvoke on “Barriers to Data Science Adoption: Why Existing Frameworks Aren’t Working”, with the following abstract.

Broadly, data science is an interdisciplinary scientific approach that provides methods to understand and solve problems in an evidence-based manner, using data and experience. Despite the clear benefits from adoption, many firms face challenges, be that legal, organisational, or business practices, when seeking to implement and embed data science within an existing framework.

In this workshop, panel and audience members draw on their experiences to elaborate on the challenges encountered when attempting to deploy data science within existing frameworks. Panel and audience members are drawn from business, academia, and think-tanks. For discussion purposes the challenges are grouped within three themes: regulatory; investment; and workforce.

The regulatory framework governing data science is outdated and fragmented, and for many new developments, regulations are in a state of flux, or non-existent. This creates an uncertain environment for investment in data science and can create barriers to the widespread adoption of state-of-the-art data science. For instance, the governance of data use and data sharing are unclear, and this may compromise trust in data. Additionally, privacy laws, currently under scrutiny in many countries, may limit how firms can use data in the near future affecting innovation, and planned investments (e.g., Google Sidewalk). As data science technologies and applications change rapidly, the regulatory framework must continually evolve or risk becoming outdated and a hindrance to developments in the field.

Investment risk exists for any project, however data science projects are especially risky for various reasons, including the fundamental role that datasets play. Creating, cleaning, updating, and securing a dataset is a difficult process that requires a substantial investment of resources. And while these are essential processes in order to extract value from data science, they rarely provide value themselves which can be a challenge when making a business case and investment decision and adds risk to the decision to adopt data science practices especially for small- and medium-sized businesses.

The workforce challenges of data science are extensive. It is difficult to recruit qualified candidates due to the specific skill sets needed, and, with more firms seeking to implement the new innovations, this problem is expected to become worse. Additionally, many fear the lack of diversity in the current pool of workers may hinder progress in cases where the data science applications are context specific and would benefit from subject-matter expertise and a diversity of experience.

Outcomes of the workshop are expected to include a report that lists a set of existing practises and high-level barriers to deployment.

Intro from Rohan Alexander (UToronto iSchool), co-organized with Kelly Lyons (UToronto iSchool), Michelle Alexopoulos (UToronto Economics), Lisa Austin (UToronto Law)

Data science adoption doesn’t seem to have changed, over the past 5 to 10 years

Three themes:

  • Legal frameworks, consent issues, interacting with other jurisdictions
  • Organization challenges:  Difficult to add to old organization, lack of qualified candidates, lack of diversity, pipeline issue of graduates going to other countries
  • Risks:  Have to get clean datasets, so rational at 5% makes sense, or allocation of resources?

Submit questions to Slido.com, #L763

This digest was created in real-time during the meeting, based on the speaker’s presentation(s) and comments from the audience. The content should not be viewed as an official transcript of the meeting, but only as an interpretation by a single individual. Lapses, grammatical errors, and typing mistakes may not have been corrected. Questions about content should be directed to the originator. The digest has been made available for purposes of scholarship, posted by David Ing.

Panel discussants

CASCONxEvoke Workshop Panel

Omni.ai

  • Launched by Deloitte 5 years ago
  • Ran survey, four themes
  • Found 16% adoption of AI in industry
  • 1. Lack of understanding:  Only 5% of Canadians think that they will be impacted by AI over the next 5 years, despite having smartphone.
  • 2. Lack of trust:  Data breaches, misuse of data.  Killer robots, not what machine learning is about.  Boston Dynamics video creates misconceptions.  Also chatbots used in customer care, fancy versions of press 1 for this, press 2 for that, yet people use terms like “computers are seeing”.  Computer systems as omnipresent, and don’t trust decision-makers.
  • 3. Lack of awareness:  In Toronto, ecosystem of startups, but difficult from them to link to enterprise companies.  Not getting in front of decision-makers.  Enterprises feel risk of dealing with startups that may not be around for few years.  Hard to advertise, misuse of language.
  • 4. Inability to scale:  Companies don’t know how to adopt.  May hire data scientists, but into corner, and think they’ll do cool stuff and make money.  Have to think of ROI from beginning.  May not have incentives to put into production, after the work is done.  Prove to me it works, versus assume that it’s going to work.

Ajiolomohi Egwaikhide, Senior Data Scientist, IBM Systems

  • What can go wrong?  Bad algorithm, or bad data
  • Customers want to take data, and do cool stuff, but don’t have enough data or the right data to solve business problem.  Then end up with backlash.
  • Bad data: 
    • a. Insufficient quantity
    • b. Non-representative training data, or data isn’t telling them what they’re thinking.
    • c. Quality of data, has a lot of outliers, noise, missing data.  Don’t know what they should be collecting.
    • d. Irrelevant features:  Lots of columns of database, but no business capabilities around them
  • Bad algorithm:
    • a. Using fancy algorithms instead of simple models, e.g. survivor algorithm versus simpler logistic regression.  Not selling the right thing. 
    • b. Underfitting
  • People jumping into data

Inmar Givoni, Uber Self-Driving Automobile Division

  • Haven’t defined adoption.
  • John McCarthy said if it works, don’t call it artificial intelligence.
  • There’s a lot of adoption, e.g. a smartphone has 100 instances of what we might call AI.
  • Legal aspects:  e.g. supervised deep learning algorithms, in medical imaging, but then issues with privacy and disagreements from experts on labels, should otherwise be solvable.
  • Risks:  Idea of killer robots.  Self-driving paradox, if get 10% improvement, would have 1.2M die instead of 1.3M, isn’t a personalized argument.
  • Technical:  From software engineering, coding algorithm, get a precision or metric of interest, you could have messed up, you wouldn’t know, because it’s not testable in the same way as regular software.  If can tune parameters, if you don’t have a deep understanding or mathematical intuition, will get people throwing data at it.  Irresponsible use.
  • Algorithms (e.g. Tensorflow) are still experimental, missing debugging, control flow.
  • Policy:  Technology ahead of law.  Ethical considerations, e.g. people messing up traffic signs.  Will continue working on robustness, but people should go to jail for tearing down a traffic sign.
  • Productionization:  Have data scientists, prototype quickly in a sandbox environment, load, train metrics, and they say it will work.  But then to put into a production system, it’s streaming and works in real time.  It doesn’t care about models, it cares about output and costs.  e.g. build a detector 5% better, but then the car doesn’t work as well.  Not good correlation between model-level metric and system-level metric.

Legal perspective (Aaron?)

  • Barrier to adoption:
  • (i) Regulatory:  Laws are antiquated.  Cambridge Analytica, etc., is based on the consent-based model.  People don’t read the terms they click on.  Transparency.  Dealing with disclosure.  We don’t know what we’re agreeing to.
  • (ii) Investment and risk:  Big undertakings, expensive.  Senior management vs. data scientists.  Companies treating data science as just another project.  Data quality.  Considered in a silo.  No architecture for data.
  • (iii) Workforce, trading and labour market:  Requires a lot of expertise, there’s a shortage of people.  Difficult to recruit people with skills.  Expect to become worse.  Lack of qualified labour.
  • Are there ways that universities could be more involved?  Can we build universities, or training protocols, within companies?
  • Data science will become more important, not less.  How to handle?

Panel discussion together.

Clients coming for consultants, because they want to push something forward, and say we can do it ourselves.  Data scientists put into a position to just solve this.

Bad reputation on trust and ethics.

  • Data science equated to steam engine and electricity.
  • Clients aware that they should be doing something.
  • What, but then trust and ethics.
  • In banking, lots of accountability associated with models in production.
  • Line between statistics and machine learning is blurry.
  • Struggling with black box approaches
  • Lack of trust slows us down
  • How can I do it?

Question:  Reputation of data science, and how it impacts work?

When are we going to get the cars off the road?

  • Companies are heavily invested, but it’s an open-ended scientific problem.
  • We don’t know how to do it, but then companies say we’re going to have it by end of the year.
  • Already have assistive technologies that work well if someone is behind the wheel
  • Robustness at 99.9% in a lot of technologies today
  • But have a lot of variability today.
  • If have someone behind wheel that could save from serious errors, or on given route, or under weather conditions, different.

Question:  Unsolvable problems.  Researchers show examples of recognizing cat from dog, but then expect we can do cars.  Problem of lack of understanding, but it’s not zero-one.  They’re feeling overconfident.  A general vision of what AI could do, but we’re not there today.

Interdisciplinarity is complicated.  Executives not making bad decisions, requisite understanding.  It’s economic, technical, trade, privacy, transnational.  Evolutionary, not binary on-off.  Will get to better decision-making frameworks, but will take time.

Question:  Reputation.  AI is doing their job, not true, could augment.  How to correct messaging?

In high school, start of robotics, were promised 4-day work weeks.  Predict that there will be no such thing as replacement.

People’s jobs change over time.  Agriculture.  Call centres are replaced by chatbots.  If remove 80% of mundane work, but unemployment work isn’t 50% today from agricultural.  Problem is over what time frame, 5 years or 15 years.

Question:  Automation versus AI?

In school, used term machine learning instead of AI.  Now AI is everything.  Lots of natural language processing.  Technologies are getting more intertwined.

People myopic about technology.  Many jobs get created in unpredicted ways, for non-technical people.  Gig economy.  How many people get married through online dating apps?  Using AirBnB, Uber, are rapid changes in life.

People thinking about how should retrain.  Retraining programs by Bank of Canada aren’t being used.

Questions:  From Twitter — It’s AI when you’re funding.  It’s machine learning when you’re building.  It’s logistic regression when it’s implemented.

First course in machine learning including logistic regression and linear regression.  If you want to call it AI, call it AI, it doesn’t matter any more.

Question:  Mining sector, predictive maintenance.  Anti-fraud, in banking, 80% of workload is logistic regression. 

If you can structure data in a table, use logistic regression.  Get stability, robustness, convergence.

Educate customers towards getting real value.

History, neural networks became popular, due to availability of data.  Successful on a very small set of applications.  Anything that a human being can do quickly, video-audio-words.  But have age or income, probably won’t get a neural network that works better than logistic regression.  But neural networks would solve problems that weren’t solvable on logistic regression.  Reputation, but then people trying to use neural networks where they don’t apply, in video or audio types.
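As a minimal illustration of the panel’s point (my own sketch inserted here, not something shown at the workshop), a logistic regression on tabular data is a one-liner in base R; mtcars is a built-in dataset used purely as a stand-in for structured business data.

# Predict whether a car has a manual transmission (am) from weight and horsepower.
fit <- glm(am ~ wt + hp, data = mtcars, family = binomial)
summary(fit)

# Predicted probability of a manual transmission for a car weighing 2,500 lbs with 110 hp.
predict(fit, newdata = data.frame(wt = 2.5, hp = 110), type = "response")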

Question:  Domain-specific knowledge?

Clients bring a lot of domain-specific knowledge.  Haven’t been in a situation where they’ve asked for a specific algorithm.  It has to make business sense for them.

Working with data, have to understand that data.

Question:  16% adoption rate?

16% adoption rate in Canada, in small and large businesses.

Question:  Trust.  Worry about lack of regulations?  Moving target? Locking things down?

Smaller companies don’t think about regulatory aspects.  Larger companies are working with regulators.  Trust issue.

Decisions that are made will become more important.  e.g. judge will make judgement, then can verify, and subject to review.  A construct to review black box decisions?  Proprietary?  Can’t review?  Mortgage applications, university applications, bail applications.

Question:  Regulation.

In Canada, sometimes not taking care of own, e.g. GDPR is in Europe.

Question:  Not releasing datasets.  Understanding why AI is making decisions.

A lot of companies think their data is their competitive advantage.  But at the same time, want to get access to others’ data, so have to share.  Startups in Toronto work on how might share insights without sharing data.

For self-driving, Waymo isn’t shared.  Trying to figure the best way to go at it.  Tough.  Sensitivity to cyber attacks.

Emergence of three data blocs of governance.

  • Geo-political development.  China, U.S. and EU moving different ways, other countries haven’t moved.  Would like to see Canada take leadership position.
  • Economic:  85% of top company value is intellectual capital and brand value.

Regulations under constant review?

Lawyers take a principled approach.  But things are moving quicker.  e.g. data portability, is it mine or company?  IoT and sensor data is at the bleeding edge.  CCTV cameras that got hacked.  Measure twice and cut once.

AI is a marketing word, used badly.  1960s-1970s interpretation of human consciousness.  Younger interpretation of things that do things for you, not conscious.  Executives want adoption, but not definitions.  Black box, magic.  When asked about adopting AI, are they adopting heuristic algorithms that marketers would call it.

What problem are we trying to solve?  Rate of adaptation, faster or better?  Pace of activity is right?  Whose problem are we trying to solve?  In Canada, have banks, medium-size manufacturing, and lots of small.  Need to find a way to have conversations with IT organization, as only going to give budget.  Technology has enabled a small number of companies to decimate government, control the way we’re living.  Have to look at open source, and then governments take control?

Is there something we can do, if there’s a barrier?

Data science as science.

Research money earmarked as part of IT budget.

Question:  Policy in other jurisdictions?

Transnational, also in NAFTA 2.0.  Constraints by other countries.  If want to set own policies, it’s about economic opportunity.  Don’t want to set up a regulatory framework where companies can’t operate here.

Question:  As users, might we own our own data?

What your phone does, while you’re asleep.  The amount of world knows about you isn’t good.

Trying to write a research proposal, going forward.  How should we approach?

Panel not right format?

Closing, 1 minute each.

Great technology, fourth industrial revolution, will make changes, do have to approach with caution.

What is machine learning useful for, what isn’t it useful for?  It’s mixed up.

AI, ML, data science — it’s the future, right conditions.  Need to do more education of ourselves.  There’s a lot we don’t know.

We do get to create a policy framework.

#artficial-intelligence, #cascon, #data-science, #machine-learning

Own opinion, but not facts

“You are entitled to your own opinions, but not to your own facts” by #DanielPatrickMoynihan is predated, according to @Freakonomics, by #BernardMBaruch in 1950: “Every man has a right to his own opinion, but no man has a right to be wrong in his facts”.
Source: “There Are Opinions, And Then There Are Facts” | Fred Shapiro | August 18, 2011 | Freakonomics Blog at http://freakonomics.com/2011/08/18/there-are-opinions-and-then-there-are-facts/

See also:

Bernard Baruch, photographed by Harris & Ewing, https://en.wikipedia.org/wiki/Bernard_Baruch#/media/File:BARUCH,_BERNARD_2.jpg

#facts, #opinions

R programming is from S, influenced by APL

The history of data science tools traces #rstats of the 1990s back to the S language at Bell Labs in the 1970s, and the <- arrow symbol back to #APL (A Programming Language) in the 1960s.

As you all know, R comes from S. But you might not know a lot about S (I don’t). This language used <- as an assignment operator. It’s partly because it was inspired by a language called APL, which also had this sign for assignment.

But why again? APL was designed on a specific keyboard, which had a key for <-:

“Why do we use arrow as an assignment operator?” | Colin Fay | September 24, 2018 at https://colinfay.me/r-assignment/

In the mid-1980s, I worked primarily in APL, with the special character set keyboard.

By User:Rursus – APL-keybd.svg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2456412

I was curious about this, because I saw an alternative to <- in the form of an “assign” function.

A more laborious, though sometimes necessary, way to assign variables is to use the assign function.

assign("j", 4)
j
[1] 4

“4.2.1. Variable Assignment” | Jared P. Lander | R for Everyone: Advanced Analytics and Graphics

This suggests that an alternative to using the arrow is a more functional style of programming.
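A minimal sketch (in base R, with illustrative variable names of my own) shows the three assignment forms side by side, and why assign(), being an ordinary function, supports a more programmatic style.

x <- 4           # the arrow operator inherited from S (and, before that, APL's keyboard)
y = 4            # the equals form, allowed at the top level
assign("z", 4)   # the functional form: the target name is passed as a character string

identical(x, y)  # TRUE
identical(y, z)  # TRUE

# Because assign() is a function, the name being bound can be computed at run time,
# e.g. creating model1, model2, model3 in a loop.
for (i in 1:3) {
  assign(paste0("model", i), i^2)
}
mget(c("model1", "model2", "model3"))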

#apl-a-programming-language, #r-programming, #s-programming-language

Bullshit, Politics, and the Democratic Power of Satire | Paul Babbitt | 2013

Satire can be an antidote, says Prof. #PaulBabbitt @muleriders, to #bullshit (cf. rhetoric; hypocrisy; crocodile tears; propaganda; intellectual dishonesty; politeness, etiquette and civility; commonsense and conventional wisdom; symbolic votes; platitudes and valence issues).

While lying is a misrepresentation of the truth, [Harry] Frankfurt argues, BS is an indifference to truth and a misrepresentation of the self—and worse than lying.

[…] BS is not only deceptive but also contributes to the decay of public discourse. Its emptiness, its meaninglessness crowd out substantive discussion. It directs attention to the trivial as much as the false, and it dumbs us down. Unlike the lie, BS derives its effectiveness from the way it says nothing while appearing to say something profound. […]

There’s no cure for BS, but there is a powerful treatment: satire, which can identify and mock BS, resistant as it is to conventional modes of argumentation and dispute. At its best, satire exposes the pretensions of the powerful. Irreverence sometimes troubles us, but irreverence, or at least the tension between reverence and irreverence, is essential to democracy. Reverence inspires an adherence to authority that is undemocratic at its core. In challenging authority, humor performs a critical democratic duty.

Babbitt, Paul. 2013b. “Taking BS Seriously.” The Chronicle of Higher Education, November 18, 2013. https://www.chronicle.com/article/Taking-BS-Seriously/142967.
Illustration by @bloch_serge, in Babbitt 2013b, The Chronicle of Higher Education

The short article in The Chronicle of Higher Education (November 2013) was preceded by a longer article for the American Political Science Association (August 2013).

If the opposite of bullshit is sincerity, then it may seem odd to offer satire as counter-measures. The mechanisms of exposing bullshit come close to bullshit itself. Satire is, after all, insincere. Stephen Colbert of the Colbert Report is a transparent persona that seems to have little to do with the actual person Stephen Colbert. Because it is transparent, though, it would be unfair to accuse Colbert of practicing bullshit in the same way that Callicles does. We may distinguish easily between the obvious and transparent performance of Stephen Colbert and the dissembling of a politician. […]

Humor may be mean spirited and cruel. However, to object to comedy because it is irreverent, because it challenges authority is to deny its most important power. It is precisely those modes that have the best chance of exposing bullshit to the ridicule it deserves. […]

There are good reasons to use such a strategy, and they can help us understand the purpose of satire in our own political environment. The fact that the satirist may pay with her life is perhaps the best evidence we have of its subversive quality. Satire sometimes provokes feelings of violation and violent reactions. Satire is irreverent, and they may target things you or I hold sacred. [….]

The only rule the bullshitter follows is to say anything so long as it works. As long as the bullshitter refuses to abide by standards of transparency and honest exchange of ideas then there is no choice but to engage the bullshitter on different ground. The satirist does not follow the same rules as the serious journalist or pundit. In the main, political humor is cast as critical, even destructive precisely because the humorist does not play by the same rules as “respectable” journalists. The rule breaking characteristic of the satirist is an important element in satire’s subversive character. It is not just that the satirist is targeting important powers of the system, it is that the satirist does not follow the rules either. [….]

There is of course an ugly side to this—a mass, largely uninformed audience may not be able to distinguish between exposing bullshit and mocking serious and sophisticated arguments—pomposity is to an extent a subjective evaluation of others. Comedy is indiscriminate in its targets. It not only poses questions, but it subjects its targets to ridicule. In its leveling, it erases distinctions and hierarchies that in fact are important elements in any human society. (McWilliams 1995) The informed and concerned public servant is as likely to be ridiculed as the most foolish politician. Comedy can and does expose the tendency of the public to follow the lowest common denominator. Furthermore, the kind of comedy most citizens will see, hear, or read has entertainment as its primary purpose. It does not escape the commercial imperative. One should suspect that if there is a choice between making money and performing a civic duty, money making will win out.

Babbitt, Paul. 2013a. “Bullshit, Politics, and the Democratic Power of Satire.” In American Political Science Association 2013 Annual Meeting. Chicago. https://ssrn.com/abstract=2301256.

References

Babbitt, Paul. 2013a. “Bullshit, Politics, and the Democratic Power of Satire.” In American Political Science Association 2013 Annual Meeting. Chicago. https://ssrn.com/abstract=2301256.

Babbitt, Paul. 2013b. “Taking BS Seriously.” The Chronicle of Higher Education, November 18, 2013. https://www.chronicle.com/article/Taking-BS-Seriously/142967.

Frankfurt, Harry G. 2009. On Bullshit. Princeton University Press. https://press.princeton.edu/titles/7929.html.

#bs, #satire