Process-Function Ecology, Wicked Problems, Ecological Evolution | Vasishth | Spanda J | 2015

With “systemic change” at risk of becoming a buzzword, the validity of research may be authenticated by retracing references. Ashwani Vasishth, from Ramapo College of New Jersey, published an article, writing:

These three memes – process-function ecology, “wicked problems” and ecological evolution – may together give us some interesting ways to begin to talk about systemic change in ways that lead to novel insights. When systems are viewed as nested, scale-hierarchic structures, and when they are conceived as constituted by processes and functions, and when we view change processes themselves as being driven by a sophisticated understanding of evolutionary dynamics, then we may come to a place where systemic change can be viewed as more closely approximating actual, pluralistic reality, rather than as the simplifications of reality that emerge from the more mechanical metaphors from classical physics.

Vasishth (2015), p. 111

The 2015 article resonates with me, as it cites many researchers whom I also reference.

To dig deeper, we can go back to Vasishth’s doctoral dissertation at USC.

At the University of Southern California, I am appreciative of Niraj Verma’s early help in learning to think critically about the idea of a systems approach.

Vasishth (2006), p. viii

There’s a direct tie between Niraj Verma and C. West Churchman.

Eugene P. Odum validated my intuitions about the meaning of ecosystem ecology for planning practice, and gave generously of his time to elaborate on his own ideas in this regard.

Vasishth (2006), pp. viii-ix

In 2018, the University of Georgia described Eugene Odum as “the father of modern ecology”.

A 2005 biographical memoir from The National Academies Press notes that “Eugene P. Odum was recognized nationally and internationally as a pioneer in ecosystem ecology”.

Intellectually, I owe much to the work of Timothy F.H. Allen and Thomas W. Hoekstra, who gave me my point of entry into process-function ecosystem ecology.

Vasishth (2006), p. x

Timothy F.H. Allen is listed as a professor emeritus at the University of Wisconsin-Madison, described as a “biological theorist of complexity working in ecological economics”.

Since all of these listed figures are luminaries of the International Society for the Systems Sciences, I’m feeling just one degree removed from Ashwani Vasishth.

References

Vasishth, Ashwani. 2006. “Getting Humans Back Into Nature: A Scale-Hierarchic Ecosystem Approach to Adaptive Ecological Planning.” Doctoral dissertation, Los Angeles, CA: University of Southern California. http://digitallibrary.usc.edu/cdm/compoundobject/collection/p15799coll17/id/121551. Also accessible at https://www.researchgate.net/publication/237371502_Getting_Humans_Back_Into_Nature_A_Scale-Hierarchic_Ecosystem_Approach_to_Adaptive_Ecological_Planning

Vasishth, Ashwani. 2015. “Reconceptualizing Systemic Change Using an Ecosystem Approach from Process-Function Ecology.” Spanda Journal 6 (1): 111–18. Accessible from https://www.researchgate.net/publication/280080686_Reconceptualizing_Systemic_Change_Using_An_Ecosystem_Approach_from_Process-function_Ecology . Issue accessible as open access at https://spanda.org/assets/docs/spanda-journal-VI,1-2015.pdf

Vasishth (2006) Figure 4: The three meanings of scale. [MEA, 2006]

#ecological-evolution, #ecosystem-ecology, #process-function-ecology, #wicked-problems

The Innovation Delusion | Lee Vinsel, Andrew L. Russell | 2020

As an irony, the 2020 book The Innovation Delusion by #LeeVinsel @STS_News + #AndrewLRussell @RussellProf shouldn’t be seen as an innovation, but as an encouragement to join @The_Maintainers, where an ongoing thought network can continue.

The subtitle “How Our Obsession with the New Has Disrupted the Work That Matters Most” distinguishes actual innovation from innovation-speak that hasn’t taken on the hard work of the long view.

The book is easy to read, yet well researched. Some excerpts are provided to entice more readers.

The Innovation Delusion (2020), Penguin Random House

— begin excerpt —

CHAPTER ONE: The Problem with Innovation

THE DIFFERENCE BETWEEN INNOVATION AND INNOVATION-SPEAK

For the rest of the book to make sense, there is a distinction that we must make. The distinction has to do with the way we talk about change—specifically, innovation. There is actual innovation, the profitable combination of new or existing knowledge, resources, and/or technologies. The Austrian economist Joseph Schumpeter argued that innovation is the motive force of economic change, capitalism, and indeed history itself. But genuine innovation is quite distinct from innovation-speak, a breathless dialect of word salad that trumpets the importance of innovation while turning that term into an overused buzzword. As we will see, the world we actually inhabit, including the technologies we use and need, is a very different place from the world described to us by marketing departments and CEOs—replete with the technologies they’ve convinced us to buy and rely on. [….]

To be clear: Innovation is important. It has played an essential role in economic growth and improved quality of life [….]

But much of what passes for innovation is actually innovation-speak. In recent years, economists have noted that the rate of innovation has decreased since about 1970. To put it another way, there’s no evidence that actual innovation or technological change has increased during the period when everyone started talking about innovation. At its most extreme, innovation-speak actively devalues the work of most humans, especially those who do the dirty work that keeps our technological civilization running. And, as we will see, it fails to capture the essence of human life with technology—where maintenance and reliability are far more valuable than innovation and disruption.

[….]

MAINTAINING THE THINGS THAT MATTER MOST

In the chapters that follow, we’ll show how the innovation mindset has led to a devaluation of maintenance and care, with disastrous results. We’ll meet lawyers, teachers, and engineers who have been told they need to be more innovative—even though they know that their success in many ways depends upon resisting the pressures to “fail fast” or “move fast and break things.” We are fascinated by their acts of resistance and how their attempts to maintain their integrity and do their jobs shed light on a different way forward.

In some ways, maintenance is the opposite of innovation. It is the practice of keeping daily life going, caring for the people and things that matter most to us, and ensuring that we preserve and sustain the inheritance of our collective pasts. It’s the overlooked, undercompensated work that keeps our roads safe, our companies productive, and our lives happy and secure.

— end excerpt —

There’s a pointer to the April 2016 essay, “Hail the Maintainers”, which gained wide attention.

— begin excerpt —

CHAPTER TWO: Turning Anxiety into a Product

[….]

BEFORE “INNOVATION”

Perhaps the most important factor that led to a positive revaluation of the notion of “innovation” was the role that actual innovation played in the massive economic, technological, and cultural changes that took place after the Industrial Revolution. The scale and scope of these changes, which began in eighteenth-century England before spreading across the world, are hard to comprehend. Some writers understandably call them a “miracle,” though that miracle has always exacted significant costs, including harm to workers and the natural environment. […]

Changes in values and social status were an essential part of this overall shift. In the eighteenth century, inventors were disparaged as “projectors,” an archaic term for promoters or hucksters who pushed dubious new business ventures. Very few people aspired to be inventors in a world that extolled military heroes, statesmen, and members of the nobility.

But slowly, starting in the nineteenth century, these ideals underwent a profound transfiguration. Technical creators rose in social status, particularly in England and the United States, whose leaders linked national power to the “industriousness” of their citizens. [….]

REBRANDING PROGRESS AS “INNOVATION”

Use of the word “innovation” took off in the years following World War II. This change in language had several causes, and professional economists—those vanguards of the dismal science—played a crucial role. [….]

THE RISE OF THE INNOVATION EXPERTS

To the extent that there was a recipe for the rise of Silicon Valley, the most important ingredients were prepared by Frederick Terman, dean of Stanford’s School of Engineering from 1944 to 1958 and university provost from 1955 to 1965. Terman redefined Stanford by aligning the school’s research with military priorities, and using money from defense contracts to recruit faculty members from the growing electronics industry. Terman also encouraged Stanford faculty to work as consultants in the private sector and create their own companies. The resulting flows of knowledge, people, and technology between the military, industry, and university earned Terman accolades as a father of Silicon Valley. [….]

The late Clayton Christensen, a long-time professor at Harvard Business School, is a good example of this phenomenon. In his 1997 book The Innovator’s Dilemma and a stream of subsequent publications, Christensen spelled out the idea of “disruptive innovation,” a process by which a new technology or business model can upend existing markets, firms, or products. Christensen’s concept caught on like a new, great, powerful drug. All around the world, corporate boardrooms filled with smoke as executives took turns either imagining themselves as the next disrupters, creating the killer app that would blow away the competition, or worrying that they themselves would be disrupted by some unseen start-up in their field. […]

Another example of a hot idea that unraveled on closer inspection was the notion of a “Creative Class” promulgated by the urban planner Richard Florida. In The Rise of the Creative Class and subsequent publications, Florida put forward a kind of Field of Dreams “build it and they will come” theory of public planning, only for hipsters instead of baseball players. Florida argued that the presence of the “Creative Class,” including “scientists and engineers, university professors, poets and novelists, artists, entertainers, actors, designers, and architects,” led to a virtuous cycle of investment and economic growth. […]

Another innovation product that’s recently come under scrutiny is called Design Thinking. Its roots extend back to the 1950s and ’60s, and it began as a reasonable discussion within the professional field of design. The most popular form of Design Thinking today is associated with the fabled design firm IDEO, which is most famous for creating the original Apple mouse in 1980. David Kelley, one of the company’s founders, asserted that a core aspect of good design involved “empathy,” which he characterized as “the ability to see an experience through another person’s eyes, to recognize why people do what they do. […]

We see important similarities between the Christensen, Florida, and Design Thinking stories—most of all, that they center on consulting. An enormous market of organizations and individuals yearn to be innovators and will pay big bucks to become one. Innovation experts benefit from a deep-seated human frailty, which the philosopher Ludwig Wittgenstein called “the craving for generality.” General statements are crucial for living, of course. You aren’t going to last long if you can’t learn principles like “fire burns” or “that red berry is poisonous and will kill you dead.” [….]

CHAPTER THREE: Technology after Innovation

“TECH” VS. TECHNOLOGY

[….] Let us emphasize two fundamental points again. First, technology is not just “tech”; it’s more than digital consumer devices and apps. Second, technology is not just innovation. Most of the technologies we rely on are so common that we barely think of them. They fade into the background of our culture and—crucially—our financial planning. And yet, it’s essential that these things continue to function—as anyone who has experienced a power outage or water main break can attest.

Since technology is not just innovation, and the tools that we use daily consist mostly of old things, then a logical next step is to think about not only what technology is but also when it is.

A piece of technology passes through three basic phases: innovation, maintenance, and decay. We spent the last chapter talking about innovation; now it’s time to focus on what happens after innovation. […]

CHAPTER FOUR: Slow Disaster

[….]

If these examples are any indication, it’s no wonder that the American Society of Civil Engineers (ASCE) regularly gives the United States near failing grades in its Infrastructure Report Card. Nearly 10 percent of the nation’s 613,000 bridges are structurally deficient—meaning that some elements of the bridge require monitoring and/or repair—but the ASCE finds that the country’s dams, levees, and drinking water are even worse off, with mass transit sitting in the sorriest shape of all.

The historian Scott Knowles has created a helpful term for describing these situations: slow disaster. Fast disasters, or what we normally just call disasters, include hurricanes, flooding, tornadoes, earthquakes, industrial accidents—events that sweep in quickly, damaging people’s lives and the technological systems that support our everyday existence. Fast disasters leave lasting wounds. Long after the story-seeking news cameras have packed up and gone, victims are left picking up the pieces. Some businesses and homes never come back. Lives are shattered.

A slow disaster, by contrast, is the accretion of harm from incremental neglect. It happens when children ingest chips from lead paint or when a potholed road becomes unsafe for traffic.

[….]

POSTPONING REALITY

Charles “Chuck” Marohn is a straitlaced, even square, civil engineer. A soft-spoken Catholic and registered Republican, he grew up as a farm boy and served in the National Guard before marrying his high school sweetheart and moving to a small town in the largely rural Midwest. All of this makes Marohn an unlikely candidate for the title “thought leader.” Yet, from his hometown of Brainerd, Minnesota, Marohn and his colleagues have started a growing, influential movement called Strong Towns, a nonprofit that works to make American cities financially resilient. [….]

There was still a tension between what Marohn had witnessed in places like Remer and what he had learned in graduate school, but he didn’t see it yet. “I think that…every planner believes that if you just had the right set of zoning regulations that you can solve every problem. Like, you can cure cancer and have world peace….It’s seductive. You start to believe that you have way more knowledge and way more insight than other people.”

Marohn was open-minded enough to realize he could be wrong, and he had an intellectual awakening after reading Malcolm Gladwell’s essay “Blowing Up.” Gladwell contrasts two different investors, Victor Niederhoffer and Nassim Nicholas Taleb, the latter of whom went on to write bestselling books like The Black Swan and Antifragile. Niederhoffer was in many ways a traditional investor who believed that you could find opportunities for profits in a market through mathematical analysis. In the 1980s and early 1990s, he raked in cash using this method.

Many people attributed Niederhoffer’s success to his expertise, presuming he possessed some form of knowledge that others couldn’t access. But Taleb took a completely different approach. He assumed that he was fundamentally ignorant and could not predict the future—that tomorrow was and is much more uncertain than he could possibly estimate. He used options to bet on dramatic swings in the market, wagering that things would change in ways no one could anticipate.

In Gladwell’s telling, the moral was clear. Niederhoffer’s investment company tanked and was dissolved in 1997 after taking heavy losses—and his next company failed about a decade later. Taleb’s method of building robust, or resilient, strategies was superior, because it would not be undermined by unexpected negative events.

Marohn saw a deep truth in Gladwell’s essay and believed it raised fundamental questions about his own professional fields of planning and engineering. [….]

CHAPTER FIVE: Growth at All Costs

THE INNOVATION DELUSION IN BUSINESS

[….] This chapter will explore the impact of the Innovation Delusion on institutions, including businesses, schools, and hospitals, that structure some of the most important aspects of our lives. These stories provide a more vivid picture of the price of neglect, the pressures of long-term decline, and the dangers of buying into fantasies of renewal and endless innovation. The same kinds of problems and the same patterns detailed in chapter 4 are evident: the preponderance of superficial ideas of innovation and growth; the political risks involved with (responsible) investments in maintenance; and the fact that neglecting maintenance often brings disproportionate harm to people already grappling with social and economic disadvantages. [….]

WHEN THE BUBBLE POPS

[….] Kirsch and the economist Brent Goldfarb define “bubbles” as dramatic changes in asset prices that fail to reflect changes in underlying intrinsic value. In other words, bubbles are fundamentally social phenomena driven by collective behavior. When people keep telling themselves stories that justify continued faith and investment in a particular market opportunity or way of doing things, a narrative develops and strengthens. These narratives, in turn, help to sustain the collective hallucination. [….]

THE COST OF INNOVATION WHERE WE LEARN

[….] The report card notes, “More than half (53%) of public schools need to make investments for repairs, renovations, and modernizations to be considered to be in ‘good’ condition.” But while billionaires flood educators with digital gadgets and promises of “revolution” and “disruption,” the report card found that “four in 10 public schools currently do not have a long-term educational facilities plan in place to address operations and maintenance.” [….]

These problems are not confined to K–12 public schools. Public higher education has also been receiving poor grades, albeit from a different teacher. Each year, Moody’s Investors Service publishes a “higher education outlook,” and the news in 2018 and 2019 was not good. The core problem was that universities could not meet their revenue growth targets, resulting in the need to control costs.

The existential financial dilemma of higher education in the twenty-first century is simple. The revenue-generating strategies of the late twentieth century—including annual tuition hikes of 5 to 10 percent, which shifted financial burden to students in the form of interest-bearing loans—are increasingly untenable. Administrators are scrambling to keep up with the rising costs of instruction and student services, but traditional veins of cash, such as philanthropic giving and sponsored research, continue to be dominated by a handful of elite institutions. Universities also experiment regularly with “strategic partnerships” with industrial and government actors, often framed in the dialect of innovation-speak—incubators, innovation parks, and so on. But existing evidence suggests that these gambits rarely create the jobs and economic benefits promised by their promoters. [….]

THE COST OF INNOVATION WHEN WE’RE SICK

[….] the undeniable benefits of innovations in medicine and healthcare have cast a harsh light on the problems that remain. Healthcare is at the heart of a paradox that has aggravated generations of American policy makers, reformers, and health professionals: Americans spend the most money per person on healthcare—up to twice as much as citizens of other high-income countries, according to some studies—but with worse outcomes in areas such as infant mortality and life expectancy.

The pressure to find solutions has created a system that is horribly misshapen and full of contradictions. […]

Each of these problems flows from a common source: Americans aren’t putting their good ideas to work in a systematic way that benefits all of their fellow citizens. And that general problem fits the familiar trend we’ve seen in business and education, where leaders choose to steer their organizations toward innovation, with the implicit assumption that doing so will lead inevitably to financial success. [….]

CHAPTER SIX: The Maintainer Caste

ON THE STATUS WE GIVE DIFFERENT KINDS OF WORK

[….] Ralph plays an important role at the university. He keeps 450 physical Linux machines running—equipment often used by a bunch of rowdy undergraduates—as well as a number of virtual computers. Yet it will surprise no one that the professors are the ones with status at the college. The university website is festooned with innovation-speak, including news items on how professors have introduced this or that innovation, and how the school held hackathons, coding camps, and other events meant to turn students into disrupters. The people who keep all of the computers running in the school are, of course, nowhere to be seen on the webpage. Though their labor is crucial, the IT workers are overlooked and taken for granted.

Ralph’s experience is not unique. Within organizations and society at large, maintenance roles often fall at the bottom of status hierarchies. Nearly all maintainers experience condescension on the job, whether it takes the form of being ignored, talked down to, or taken advantage of. In many organizations, for instance, janitors and maintenance workers are required to wear uniforms—often one-piece coveralls—that mark them out as maintainers. Where do these traditions and mindsets come from? [….]

WHAT WE LEARN AT SCHOOL

[….] From 2012 to 2016, both of us worked at Stevens Institute of Technology, which had (somewhat awkwardly for us) trademarked the motto “The Innovation University.” As part of their senior engineering capstone project presentations, Stevens students were required to describe how their projects were innovative. Of course, most of the projects were not in the slightest innovative, so the primary lesson students learned was how to bullshit and sell themselves as something they were not. Performance and drawing attention to one’s work as novel is an important part of being an “innovator,” something we’ll return to in a moment. But the deeper problem was how badly the Stevens innovation requirement misconstrued the nature of engineering. The sheer reality is that about 70 percent of engineers maintain and oversee existing systems. Only a small minority of working engineers have jobs focused on invention and the “research” part of R&D. As a rule engineers are maintainers and operators, not innovators. [….]

WHAT WE LEARN AT WORK

[….] Soon after we started talking about maintainers, people started telling us about Susan Cain’s book Quiet: The Power of Introverts in a World That Can’t Stop Talking. In Quiet, Cain argues that our culture overlooks and undervalues introverts, individuals who prefer to work alone and are reserved and quiet in social situations. Classic self-help texts like Dale Carnegie’s How to Win Friends and Influence People are basically primers on extroverted behavior. Such outgoing behavior is prized and rewarded in organizations and society at large, whereas introverts often find it hard to be heard or recognized.

We see two big connections between Quiet and what we’ve been hearing within The Maintainers community: First, like introverts, maintainers often work quietly in the background, keeping things chugging along while “innovators” get the glory. Our society tends to ignore such people and neither recognizes nor rewards them, which creates all kinds of problems, both for the maintainers and for the society itself. For instance, if employees at an organization find it hard to get credit and promotions for maintaining open-source software—something we’ve heard a lot—they become frustrated, even resentful, and therefore suffer personally. They are also more likely to move on to a different job and be replaced by someone with less experience. In other words, often enough, the software suffers, too—and so do the users who depend on it.

Second, we and others have found that maintainers often (but not always) are introverts. They prefer to work alone and find extended social interaction stressful and unpleasant. Ralph, the IT worker we met at the beginning of this chapter, said that he enjoys working with his other IT peers because they “aren’t showboats,” and they “genuinely enjoy helping people with their problems.” The flip side of this introversion, however, is that maintainers can find it hard to advocate for themselves and their labors. [….]

CHAPTER SEVEN: A Crisis of Care

[….]

MAINTAINING OUR BODIES, MAINTAINING OURSELVES

When the journalist Stephen Dubner interviewed us for an episode of Freakonomics Radio, he said that bodily maintenance was the first thing he thought of after he’d read one of our articles. As you get older, he pointed out, “You spend more and more time maintaining yourself.” Maintenance is the war against entropy—not only in technology but also in biology. Bodily maintenance is a constant part of human life, whether in the form of diet, exercise, or grooming. (Of course, many nonhuman animals clean and preen themselves, too.) [….]

KEEPING UP A HOME

[….] How maintenance works in any given household depends heavily on how much money it has. Wealthy people hire help to keep up their homes and yards. Drive through any U.S. city and you can tell where the rich live and where the poor live first and foremost by how the houses and landscaping are maintained: that golf-course-perfect American lawn of the well-to-do versus the crabgrass-infested, bare-spot-pockmarked yards of the poverty-stricken. In some neighborhoods, homeowners’ associations pressure households to keep entropy in check, lest newcomers let standards slide.

Most families cannot afford help. They make do on their own. But as we’ve seen in other contexts, putting off or deferring maintenance work is a constant temptation. […]

CONSUMER GOODS AND THE RIGHT TO REPAIR

[….] In the meantime, even when an object in our life is expensive enough to repair, it has become much more difficult to do so. Much of this difficulty is because computers have been built into so many things around us—most notably our cars. Automakers first put computers in cars in the 1980s and ’90s to meet federal air pollution standards, but the companies soon saw strategic potential in the technology: They could use computers to monopolize repair and force owners to go to dealerships to get work done. Consumer advocates call these corporate strategies “repair restrictions.” […]

CHAPTER EIGHT: The Maintenance Mindset

[….] in our interviews with successful maintainers, and at the conferences where we’ve brought together people who are passionate about upkeep and care. From all of these conversations, we have distilled three general principles of the maintenance mindset.

First, there is the principle that maintenance sustains success. Maintenance consists of activities that, when done correctly, ensure longevity and sustainability for a company, a city, or a family home. To put the point a different way, no innovation can persist without maintenance. Second, there is the principle that maintenance depends on culture and management. Good maintenance is possible only with good planning that takes an organization’s preexisting culture and values into account. The third principle is that maintenance requires constant care. The best maintainers take a nurturing and supportive approach to their work. They are often detail oriented, creative, and, more than anything else, dedicated to their craft. [….]

CHAPTER NINE: Fix It First

REPAIRING OUR BROKEN INFRASTRUCTURE

[….] The two first steps of adopting the maintenance mindset involve coming to grips—often painfully—with where we are at on deferred maintenance and then starting to think about maintenance costs ahead of time. As we’ll see, both of these steps—but especially the first one—face real obstacles: Oftentimes, we simply lack knowledge and measurements of the conditions of our infrastructural systems, which systems need attention first, how much their repair or replacement will cost, and so on. So, in order to even begin the process we have to get up to speed. […]

MEASURING THE PROBLEM

[….] A big reason the true cost of infrastructure and the maintenance thereof is not visible is a trick of accounting. Municipalities are not required to count infrastructure as liabilities, even though they are on the hook for taking care of them in perpetuity. [….]

[…] Making this accounting shift would be painful, pushing the books of most American cities massively into the red, but it would provide a more realistic picture of where we are and would allow us to grapple with reality, even if only on a triage basis. [….]

LOOKING DOWN THE ROAD

If you ask Chuck Marohn what he recommends localities do about infrastructure, he gets quiet. A big part of his philosophy is that there are no premade cure-all solutions that can be applied in all situations. His general recommendation is that planners and citizens start by paying attention to small details in their communities. Often this means literally walking around by climbing out of our cars (which is the medium through which we often experience small to midsize towns in this country) and getting a feel for the place by examining which neighborhoods are thriving and why, how different parts of the community are or are not connected, and how infrastructure like roads is contributing to this picture.

LEARNING FROM OTHERS

[….] As we have seen repeatedly, standards of maintenance and order have changed a great deal over time. They are culturally dependent. When you ask people who think a lot about infrastructure if there are examples of cultures or nations that are good at maintenance, they bring up a few repeatedly.

One is the Shinkansen, a high-speed rail system in Japan that began operations in 1964. The Shinkansen is a marvel of efficiency and safety, in large part because of its highly developed maintenance practices. No one has ever been killed by an accident on the Shinkansen, and there have only been two derailments in the system’s history—one from an earthquake, another from a blizzard. (For contrast, check out the sprawling Wikipedia page “List of accidents on Amtrak.”) With a top operating speed of two hundred miles per hour, the line has had an average delay of less than one minute per train, with the exception of 1990, when it just surpassed that mark. In 2002, the average delay was twenty-two seconds. […]

SOMETIMES CHEAP SOLUTIONS ARE THE RIGHT SOLUTIONS

[….] In some cases, the goal should be selective and graceful degrowth—paring back our infrastructural burden and getting smaller. People often bring up Detroit in conversations about the future of cities. […]

REFORMING SICK GOVERNANCE

[….] How do we put elected and appointed officials on the hook for maintaining existing infrastructure systems? We’ll need to be creative. […]

CHAPTER TEN: Supporting the Work That Matters Most

[….]

ACCURATE PICTURES OF LIFE WITH TECHNOLOGY

[….] There’s a lot to be said about the data and anecdotes that maintainers provide in their own unfiltered voices. We encourage you to look them up on social media. You won’t find anyone clamoring for simulators, Big Data, biometrics, or macro-innovation, that’s for sure. Rather, what you will discover are suggestions and needs that fall into the categories we’ve observed across all groups of maintenance and care workers: the need for better material rewards (such as pay and benefits); better intangible rewards (such as recognition and respect); and suggestions for fighting burnout by creating more space for maintainers to revel in the intrinsic joy of their work. We’ll take these in turn.

PAY THE MAINTAINERS

[….] Many of the most popular digital platforms are sustained by unhealthy labor models. […] Analysts use a variety of terms to refer to these workers—“code janitors,” “commercial content moderators,” “ghost workers,” “microworkers”—which speak volumes about the status bestowed on this form of labor. [….]

Companies should account for the importance of maintainers, protect them from undue harm, and compensate them in a way that reflects their contributions to these enormously profitable systems.

EATING OUR YOUNG

[….] Herein lies one of the most poignant ironies of the digital age. The promoters of software and digital technologies have long promised their benefits for enhancing community and connectivity—but in many cases these technologies are being used to reduce human contact. To resolve this tension, our society needs talented and empathetic people who understand the importance of connections, know how to make other people feel like their perspectives are valid, and are able to direct their frustrations and concerns down a productive path. Our digital systems and digital societies need maintenance and care in this vital area. [….]

CHAPTER ELEVEN: Caring for Our Homes, Our Stuff, and One Another

[….] While repair cafés, fix-it clinics, and similar events are still relatively rare in the United States, they are part of a growing movement that seeks to help people maintain and repair their own things—or at least be able to take them to local repair shops. These movements bring up a broader question of what we can do to make our world more maintainable, more caring, and thus more sustainable. What would it be like to live in a more caring world? There are many ways to answer that question: We can make improvements as individuals and members of households; respond collectively as communities; and effect change through public policy at the local, state, and federal levels.

EPILOGUE: From Conversation to Action

[….] We (Lee and Andy) are two of the three codirectors of The Maintainers, a global interdisciplinary and interprofessional community that examines maintenance, repair, infrastructure, and the ordinary work that keeps our world going. Together with our third codirector, Jessica Meyerson, we have organized a variety of activities that include conferences, video discussion groups and seminars, email and social media chatter, and focused, in-person convenings of experts in specific fields like digital archives or workforce development. From these activities we have contributed to coalitions around policy issues, such as the right-to-repair campaign, and we have secured grant funding for projects to build advocacy tool kits for librarians and archivists.

— end excerpt —

In my journey through the systems sciences, I’ve learned to be wary of intervening in a system that we don’t understand. This might be extended to a caution not to intervene with an innovation that we can’t maintain.

References

Russell, Andrew L., and Lee Vinsel. 2016. “Hail the Maintainers.” Aeon, April 7, 2016. https://aeon.co/essays/innovation-is-overvalued-maintenance-often-matters-more.

Vinsel, Lee, and Andrew L. Russell. 2020. The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most. Penguin Random House. https://www.penguinrandomhouse.com/books/576816/the-innovation-delusion-by-lee-vinsel-and-andrew-l-russell/.

#innovation, #maintainers

Republishing on Facebook as “good for the world” or “bad for the world” (NY Times, 2020/11/24)

An online social network redistributes content partly based on algorithms, and partly based on judgements made by human beings. Either influence may be viewed as positive or negative.

The trade-offs came into focus this month [November 2020], when Facebook engineers and data scientists posted the results of a series of experiments called “P(Bad for the World).”

The company had surveyed users about whether certain posts they had seen were “good for the world” or “bad for the world.” They found that high-reach posts — posts seen by many users — were more likely to be considered “bad for the world,” a finding that some employees said alarmed them.

So the team trained a machine-learning algorithm to predict posts that users would consider “bad for the world” and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content. But it also lowered the number of times users opened Facebook, an internal metric known as “sessions” that executives monitor closely.

“The results were good except that it led to a decrease in sessions, which motivated us to try a different approach,” according to a summary of the results, which was posted to Facebook’s internal network and reviewed by The Times.

The team then ran a second experiment, tweaking the algorithm so that a larger set of “bad for the world” content would be demoted less strongly. While that left more objectionable posts in users’ feeds, it did not reduce their sessions or time spent.

That change was ultimately approved. But other features employees developed before the election never were.

Roose, Isaac and Frenkel (2020), New York Times
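To make the trade-off concrete, here is a minimal, hypothetical sketch (in Python; nothing here comes from Facebook’s actual codebase) of how a predicted “bad for the world” probability might demote a post in feed ranking. The function names, scores, and the demotion_strength parameter are illustrative assumptions; the parameter stands in for the “demoted less strongly” adjustment described in the second experiment.

```python
# Hypothetical sketch only -- not Facebook's code. A predicted probability that
# users would rate a post "bad for the world" (p_bad) demotes its ranking score.
# The demotion_strength knob is the dial the second experiment effectively turned down.

def rank_score(engagement: float, p_bad: float, demotion_strength: float = 1.0) -> float:
    """Feed-ranking score: higher engagement ranks higher, but a high p_bad
    (in [0, 1]) pushes the post down, scaled by demotion_strength."""
    demotion = 1.0 - demotion_strength * p_bad
    return engagement * max(demotion, 0.0)


posts = [
    {"id": "high-reach, likely bad", "engagement": 9.0, "p_bad": 0.8},
    {"id": "modest-reach, benign", "engagement": 5.0, "p_bad": 0.1},
]

# Strong demotion (first experiment) vs. weaker demotion (second experiment).
for strength in (1.0, 0.3):
    ranked = sorted(
        posts,
        key=lambda p: rank_score(p["engagement"], p["p_bad"], strength),
        reverse=True,
    )
    print(f"strength={strength}: {[p['id'] for p in ranked]}")
```

With strength 1.0, the high-reach objectionable post drops below the benign one; dialing the strength down restores its position, which is essentially the compromise between civility and sessions that the article describes.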

Scholars might ask if the information system is designed as an open system that sweeps in new information, or if the information that circulates becomes self-referential and self-sealing.

This might lead to a deeper look at the inquiring system behind the creation and dissemination of knowledge, e.g. “Inquiring systems and asking the right question | Mitroff and Linstone (1993)”.

References

Roose, Kevin, Mike Isaac, and Sheera Frenkel. 2020. “Facebook Struggles to Balance Civility and Growth.” New York Times, November 24, 2020. https://www.nytimes.com/2020/11/24/technology/facebook-election-misinformation.html (reprinted in the Toronto Star, December 19, 2020).

#algorithms, #facebook, #inquiring-system, #inquiry-system

1969, 1981 Emery, Systems Thinking: Selected Readings

Graduate students in Social Systems Science at the Wharton School of the University of Pennsylvania (graduating 1975-1988) — the program led by Russell Ackoff — were guided to read a Penguin paperback collection of articles. Across multiple editions, the content changed. Long out of print, the earliest editions are difficult to find.

From the Internet Archive, we can resurrect an entry (circa 2007) on the Collection and Resources section of the Systems Sciences Connections Conversation. This annotated list of tables of contents and excerpts from each edition’s “Introduction” may be helpful to readers who want a sense of the articles that might otherwise be accessible as journal articles.


There are multiple editions of this book. It’s a bit confusing that the 1969 version was first published as a single volume, and the 1981 version seems to have added a second volume. We should get the table of contents for each.

F. E. Emery (editor), Systems thinking : selected readings, Penguin, 1969.

  • 398 pages
  • ISBN: 0140800719

F. E. Emery (editor), Systems thinking : selected readings, Penguin, 1971.

  • 398 pages

F. E. Emery (editor), Systems thinking : selected readings, Penguin, 1981.

  • ISBN: 0140803955 (v.1) Rev. ed. published with the addition of a second volume.
  • ISBN: 0140803963 (v.2)

Emery 1981, Volume 1

Introduction to Volume 1 and 2

Introduction to Volume 1, First Edition

Introduction to Volume 1, Revised Edition

Part One, Precedents to Systems Theory

1. A. Angyal (1941), “A logic of systems”

  • Excerpt from chapter 8 of A. Angyal, Foundations for a Science of Personality, Harvard University Press, 1941, pp. 243-61

The Structure of Wholes
System and Gestalt

2. J. Feibleman and J. W. Friend (1945), “The structure and function of organization”

  • J. Feibleman and J. W. Friend, “The structure and function of organization”, Philosophical Review, vol. 54 (1945), pp. 19-44.

Part Two, Properties of Open Systems

3. W. Koehler (1938), “Closed and open systems”

  • Excerpt from chapter 8 of W. Koehler, The Place of Value in a World of Facts, Liveright, 1938, pp. 314-28.

4. L. von Bertalanffy (1950), “The theory of open systems in physics and biology”

  • L. von Bertalanffy, “The theory of open systems in physics and biology”, Science, vol. 111 (1950), pp. 23-9.

5. W. R. Ashby (1956), “Self-regulation and requisite variety”

  • W. R. Ashby, Introduction to Cybernetics, chapter 11, Wiley, 1956, pp. 202-18.

6. V. I. Kremyanskiy (1958), “Certain peculiarities of organisms as a ‘system’ from the point of view of physics, cybernetics and biology”

  • V. I. Kremyanskiy, “Certain peculiarities of organisms as a ‘system’ from the point of view of physics, cybernetics and biology”, General Systems, vol. 5 (1960), Society for General Systems Research, pp. 221-30. [This paper first appeared in Russian in Voprosy Filosofii, August (1958), pp. 97-107.]

7. G. Sommerhoff (1969), “The abstract characteristics of living systems”

  • This paper was first published in the first edition (1969) of this volume.

Part Three, The Environment of a System

8. M. P. Schützenberger (1954), “A tentative classification of goal-seeking behaviours”

  • M. P. Schützenberger, “A tentative classification of goal-seeking behaviours”, Journal of Mental Science, vol. 100 (1954), pp. 97-102.

9. H. A. Simon (1956), “Rational choice and the structure of the environment”

  • H. A. Simon, “Rational choice and the structure of the environment”, Psychological Review, vol. 63 (1956), pp. 129-38

10. W. R. Ashby (1960), “Adaptation in the multistable system”

  • Excerpt from chapter 16 of W. R. Ashby, Design for a Brain, Wiley, 2nd edn, 1960, pp. 205-14

11. F. E. Emery and E. L. Trist (1965), “The causal texture of organizational environments”

  • F. E. Emery and E. L. Trist, “The causal texture of organizational environments”, Human Relations, vol. 18 (1965), pp. 21-32.

12. D. Cartwright and F. Harary (1977), “A graph theoretic approach to the investigation of system-environment relationships”

  • D. Cartwright and F. Harary, “A graph theoretic approach to the investigation of system-environment relationships”, Journal of Mathematical Sociology, vol. 5 (1977), pp. 87-111.

13. F. E. Emery (1976), “Causal path analysis”

  • Excerpt from F. E. Emery and C. Phillips, Living at Work: Australia, Canberra, Australian Government Publishing Service, 1976, Apps. B and C.

Part Four, Human Organizations as Systems

14. P. Selznick (1948), “Foundations of the theory of organizations”

  • P. Selznick, “Foundations of the theory of organizations”, American Sociological Review, vol. 13 (1948), pp. 25-35.

15. F. E. Emery and E. L. Trist (1960), “Socio-technical systems”

  • F. E. Emery and E. L. Trist, “Socio-technical systems”, in C. W. Churchman and M. Verhulst (eds.), Management Science, Models and Techniques, vol. 2, Pergamon, 1960, pp. 83-97.

16. E. Nagel (1956), “A formalization of functionalism”

  • E. Nagel, “A formalization of functionalism”, Logic Without Metaphysics, Free Press, 1956, pp. 247-83.

17. R. L. Ackoff and F. E. Emery (1972), “Structure, function and purpose”

  • R. L. Ackoff and F. E. Emery, On Purposeful Systems, London, Tavistock, 1972, New York, Aldine Atherton, 1972, chap. 2

18. W. M. Sachs (1976), “Toward formal foundations of teleological systems science”

  • W. M. Sachs, “Toward formal foundations of teleological systems science”, General Systems, xxi (1976), pp. 145-54.

Emery 1982, Volume 2

Introduction

Part One, Perspectives on Systems Thinking and Systems Analysis

1. N. Jordan (1968), “Some thinking about ‘system’”

  • N. Jordan, Themes in Speculative Psychology, chap. 5, London, Tavistock, 1969 [sic], pp. 44-65.

2. I. R. Hoos (1972), “Methodology, methods and models”

  • Excerpt from chapter 5 of I. R. Hoos, Systems Analysis in Public Policy, University of California Press, 1972, pp. 124-36.

3. F. E. Emery (1973), “Planning for real but different worlds”

  • Excerpt from chapter 12, ‘Educational planning and strategic innovation’, in G. S. Harman and C. Selby Smith (eds.), Designing of a New Education Authority, Education Research Unit, Australian National University, 1973.  Previously reproduced in R. L. Ackoff (ed.), Systems and Management 1974, New York, Petrocelli.

4. H. W. J. Rittel and M. M. Webber (1974), “Dilemmas in a general theory of planning”

  • H. W. J. Rittel and M. M. Webber, “Dilemmas in a General Theory of Planning”, chapter 12 in R. L. Ackoff (ed.), Systems and Management Annual 1974, New York, Petrocelli, pp. 219-33.

Part Two, Systems Thinking about Individuals and Groups

5. F. Heider (1946), “Attitudes and cognitive organization”

  • F. Heider, “Attitudes and cognitive organization”, The Journal of Psychology, vol. 21 (1946), pp. 107-12.

6. M. C. Greco (1950), “Neurosis as a system property of group life”

  • Excerpts from M. C. Greco, Group Life, New York, Philosophical Library, 1950.

7. S. S. Tomkins (1962), “Image, purpose and affect”

  • Excerpt from S. S. Tomkins, Affect — Imagery — Consciousness, vol. 1, New York, Springer, 1962, pp. 17-24.

8. A. Angyal (1965), “Personality as a hierarchy of systems”

  • Excerpt from A. Angyal, Neurosis and Treatment, New York, Wiley, 1965, pp. 48-58.

9. S. E. Asch (1952), “The individual and the group”

  • Excerpts from S. E. Asch, Social Psychology, New York, Prentice-Hall, 1952, pp. 128-37, 257-63

10. I. Chein (1954), “The environment as a determinant of behavior”

  • I. Chein, “The environment as a determinant of behavior”, The Journal of Social Psychology, vol. 39 (1954), pp. 115-27.

11. M. Selvini Palazzoli, L. Boscolo, G. Cecchin and G. Prata (1975), “Paradox and counterparadox: a new model for the therapy of the family in schizophrenic transaction”

  • M. Selvini Palazzoli, L. Boscolo, G. Cecchin and G. Prata, “Paradox and counterparadox: a new model for the therapy of the family in schizophrenic transaction”, in J. Jørstad and E. Ugelstad (eds.), Schizophrenia 1975, Oslo, Universitetsforlaget.

Part Three, Systems Thinking and the Communicative Act

12. F. Heider (1958), “Language as a conceptual tool”

  • Excerpt from F. Heider, The Psychology of Interpersonal Relations, New York, Wiley, 1958, pp. 7-18

13. J. de Rivera (1969), “The concepts of anger and aggression”

  • J. de Rivera, “The concepts of anger and aggression”, Psychology Department, New York University, 1969, pp. 15-32 and 42-5 (mimeographed paper).

14. W. Labov and D. Fanshel (1977), “Rules of discourse”

  • Excerpt from W. Labov and D. Fanshel, Therapeutic Discourse: Psychotherapy as Conversation, New York, Academic Press, 1977, pp. 74-88.

Part Four, On Hierarchical Systems

15. P. G. Herbst (1976), “Non-hierarchical organizations”

  • Excerpt from P. G. Herbst, Alternatives to Hierarchies, Leiden, Martinus Nijhoff, 1976, pp. 29-40

16. S. Beer (1972), “The multinode — System Five”

  • Excerpt from S. Beer, Brain of the Firm, London, The Professional Library, 1972, pp. 253-63.

17. G. Sommerhoff (1974), “Hierarchies of goals and subgoals”

  • Excerpt from G. Sommerhoff, Logic of the Living Brain, London, Wiley, 1974, pp. 98-103.

Part Five, Ecosystems

18. C. Geertz (1971), “Two types of ecosystems”

  • Excerpt from C. Geertz, Agricultural Involution: The Processes of Ecological Change in Indonesia, Berkeley, University of California Press, 1971, pp. 15-37.

19. M. Harris (1975), “Mother Cow”

  • Excerpt from M. Harris, Cows, Pigs, Wars and Witches, London, Hutchinson, 1975, pp. 11-31.

20. R. R. Curry (1976/77), “Watershed form and process: the elegant balance”

  • R. R. Curry, “Watershed form and process: the elegant balance”, Geology, vol. 480 (1977), pp. 1-27.  Extracted in Co-Evolution Quarterly, winter (1976/77), pp. 14-21.

Part Six, Redesigning Systems

21. R. L. Ackoff (1968), “Toward an idealized university”

  • R. L. Ackoff, “Toward an idealized university”, Management Science, vol. 15, no. 4. (December 1968), pp. B-121-30.

22. J. B. Channon (1976), “Work-settings”

  • J. B. Channon (1976), “Work-settings”, Military Review, May (1976), pp. 74-87.

23. F. E. Emery (1977), “The assembly line: its logic and its future”

  • Excerpt from F. E. Emery, Futures We are In, Leiden, Martinus Nijhoff, 1977, pp. 102-15.

Part Seven, Systems Thinking and Our Future Governance

24. F. E. Emery (1976), “Adaptive systems for our future governance”

  • F. E. Emery, “Adaptive systems for our future governance”, National Labour Institute Bulletin (New Delhi), vol. 2 (1976), pp. 121-9.

25. S. Beer (1975), “On heaping our science together”

  • S. Beer, “On heaping our science together”, in C. W. Churchman (ed.) Systems and Management Annual 1975, New York, Petrocelli/Charter, pp. 469-84.

Part Eight, Ideals and Common Ground

26. F. E. Emery (1977), “The emergence of ideal-seeking systems”

  • Excerpt from F. E. Emery, Futures We are In, Leiden, Martinus Nijhoff, 1977, pp. 67-91.

27. F. E. Emery (1976), “Searching for common ground”

  • F. E. Emery, “Searching for common ground”, in M. Emery (ed.), Searching, Canberra, Centre for Continuing Education, ANU, 1976, pp. 45-51.

Introduction to Volume 1 and 2

The last reading in the first edition (1969), by M. Ways, was first published in January 1967 and entitled ‘The Road to 1977’.  [p. 8]

[….]

The original volume of readings has been revised to reflect major theoretical developments and the emergence of promising methodologies.

These, however, have not been the only trends in systems thinking.  It seems to me that there are at least five trends represented in volume 2:

First, a greater concern for planning that is adaptive and participative (Readings 3 and 4 in Part 1; Ackoff, 1974).

Second, a new non-mechanical image of man’s relation to man (Parts 2 and 3; Chein, 1972).

Third, toward the design of organizations that support and encourage greater variety in the pursuits of their members (Parts 4 and 6).

Fourth, a new perception of man in his environment (Part 5).

Fifth, approaching future studies not as a projected state of a closed system but as choices between alternative futures by purposeful people and their institutions (Parts 7 and 8; Mesarovic and Pestel, 1974; Ackoff, 1972).

Merely listing these trends tells us something else:  they are among the broadest trends to be observed in our societies over the past decade or so.

Introduction to Volume 1, First Edition

Introduction to Volume 1, Revised Edition

Some revisions have been made to this set of readings to complement better the new volume of readings, Systems Thinking, Volume 2.

The first two readings that were in Part Five have been dropped.  They served in the first edition to stimulate thought about where the frontiers of systems thinking would move in comparison to where they appeared to be in 1969.  Since then the frontiers have moved, and not in the generally expected directions (see the Introduction to Volume 2).  The frontiers in planning are now better seen in Readings 3 and 4 in Volume 2.  The frontiers in systems thinking about government are now much closer to where Stafford Beer and I see them in our papers at the end of Volume 2.

Reading 5 from Katz and Kahn has been dropped for reasons of space.  It is a very readable analysis of the distinction between open and closed systems.  The distinction is now little questioned, and, in fact, in 1977 Prigogine received a Nobel Prize for his many years of work on the thermodynamics of open chemical systems.  [p. 21]

Reading 16, by Ackoff, has been dropped.  It was didactically relevant in 1969, and the state of the art then was still very much as he had described it in 1960.  His 1972 theoretical paper which replaces it (as the new Reading 17) was a significant contribution to the large shifts in systems thinking that took place in the seventies.  The new Reading 18, by Sachs, represents one of the more successful attempts to build on that operational base.  [pp 21-22]

[….]

The new Cartwright and Harary paper (Reading 12) spells out at some length what can be achieved with graph theory.  [….]

#systems-thinking

1968 Buckley, “Modern Systems Research for the Behavioral Scientist: A Sourcebook”

One book that I uncovered early in my systems sciences journey (circa 1998) was a 1968 volume by Walter Buckley. In 2007, I had posted the contents of the book in the “Collections and Resources” section of the System Sciences Connections Conversations.

The content has been saved on the Internet Archive. I’m resurfacing it here (and adding some updates), so that search engines might pick up the contents again. It should be noted that the volume is a compendium of works that might be available elsewhere. The table of contents itself is worth browsing. (The original links on the Internet Archive are complemented by contemporary links).

In 2017, it looks like Routledge republished the volume as an ebook. There’s a downloadable PDF of the table of contents, foreword, and the first chapter (by Boulding). Strangely, the ebook seems to go only to Chapter 47 … which might be overlooked if not for this transcription from the original 1968 hardcopy.


Walter Buckley (editor), Modern Systems Research for the Behavioral Scientist: A Sourcebook, Aldine Publishing Company, 1968.

There’s an entry on Walter F. Buckley on Wikipedia [contemporary link].  There is a “Walter Buckley Memorial Award for Excellence in Presenting Sociocybernetics” [contemporary link] sponsored by the RC51 Research Committee on Sociocybernetics, with a description of his contribution [contemporary link].  He was honoured in 1998 at the World Congress of Sociology [contemporary link].


Contents

Preface

Foreword, by Anatol Rapoport

General Introduction

Part I. General Systems Research: Overview

1. Kenneth E. Boulding, “General Systems Theory — The Skeleton of Science”

  • From Kenneth Boulding, “General Systems Theory — the Skeleton of Science,” Management Science, 2 (1956), 197-208.  Reprinted with the permission of Management Science and the author.

2. Ludwig von Bertalanffy, “General Systems Theory — A Critical Review”

  • From Ludwig von Bertalanffy, “General Systems Theory — A Critical Review,” General Systems, VII (1962) 1-20.  Reprinted with permission of the author and the Society for General Systems Research.

3. Norbert Wiener, “Cybernetics in History”

  • From Norbert Wiener, “Cybernetics in History,” The Human Use of Human Beings: Cybernetics and Society (Garden City, N.Y.: Doubleday Anchor, 1954), Chapter I.  Reprinted with permission of Houghton Mifflin Company.

Part II. Parts, Wholes, and Levels of Integration

4. Edward Purcell, “Parts and Wholes in Physics”

  • From Edward Purcell, “Parts and Wholes in Physics.” Reprinted with permission of The Free Press from Parts and Wholes, edited by Daniel S. Lerner.  Copyright 1963 by Massachusetts Institute of Technology.

5. K. M. Khailov, “The Problem of Systemic Organization in Theoretical Biology”

  • From K. M. Khailov, “The Problem of Systemic Organization in Theoretical Biology,” translated by Anatol Rapoport from “Problema sistemnoi organizovannosti v teoreticheskoi biologii,” Zhurnal Obschchei Biologii, 24 (1963), 324-332, in General Systems, IX (1964), 151-157.  Reprinted by permission of the translator.

6. R. W. Gerard, “Units and Concepts of Biology”

  • From R. W. Gerard, “Units and Concepts of Biology,” Science, 125 (1957), 429-33.  Reprinted by permission of the author and Science.

7. Robert Redfield, “Levels of Integration in Biological and Social Systems”

  • From Robert Redfield, “Introduction,” in Robert Redfield (Ed.) Levels of Integration in Biological and Social Systems (Lancaster, Pa.: Jaques Cattell Press, 1942), pp. 5-26.  Reprinted with permission of Jaques Cattell Press.

Part III. Systems, Organization and the Logic of Relations

8. Anatol Rapoport and William J. Horvath, “Thoughts on Organization Theory”

  • From Anatol Rapoport and William J. Horvath, “Thoughts on Organization Theory,” General Systems, 4 (1959), 87-91.  Reprinted by permission of the authors and the Society for General Systems Research.

9. V. I. Kremyanskiy, “Certain Peculiarities of Organisms as a ‘System’ from the Point of View of Physics, Cybernetics, and Biology”

  • From V. I. Kremyanskiy, “Certain Peculiarities of Organisms as a ‘System’ from the Point of View of Physics, Cybernetics, and Biology,” a translation of a Russian article prepared by U.S. Joint Publications Research Service.  Original publication in Voprosy Filosofii (Problems of Philosophy), August, 1958, pp. 97-107.  Translated from the Russian by Anatol Rapoport in General Systems, 5 (1960), 221-24.  Reprinted by permission of the translator and publisher.

10. A. D. Hall and R. E. Fagen, “Definition of System”

  • From A. D. Hall and R. E. Fagen, “Definition of System,” revised introductory chapter of Systems Engineering (New York: Bell Telephone Laboratories), reprinted from General Systems, I (1956), 18-28.  Reprinted by permission of the authors and Bell Telephone Laboratories.

11. Warren S. McCulloch and Walter H. Pitts, “A Logical Calculus of the Ideas Immanent in Nervous Activity”

  • Reprinted from The Bulletin of Mathematical Biophysics, 5 (1943), 115-33, with permission of the authors and editor.  To conserve space, the tentative mathematical sections II and III have been omitted.  For more recent and precise work in this area, see S. C. Kleene, “Representation of Events in Nerve Nets and Finite Automata,” in C. E. Shannon and J. McCarthy (Eds.), Automata Studies (Princeton, N.J.: Princeton University Press, 1956); and I. M. Copi, C. C. Elgot, and J. B. Wright, “Realization of Events by Logical Nets,” J. Assn Computing Machinery, 5 (1958), 181-96.

12. John von Neumann, “The General and Logical Theory of Automata”

  • From John von Neumann, “The General and Logical Theory of Automata,” in Lloyd A. Jeffress (Ed.), Cerebral Mechanisms in Behavior: The Hixon Symposium (New York: John Wiley and Sons, 1951), pp. 1-2, 15-31.  Reprinted by permission of the author and publisher.  To conserve space, two sections unessential to von Neumann’s theory of automata have been omitted; they are entitled “Discussion of Certain Relevant Traits of Computing Machines” and “Comparisons between Computing Machines and Living Organisms.”  To complete the paper from which this selection is excerpted, the author attached this note:  “This paper is an only slightly edited version of one that was read at the Hixon Symposium on September 20, 1948, in Pasadena, California.  Since it was delivered as a single lecture, it was not feasible to go into as much detail on every point as would have been desirable for a final publication.  In the present write-up it seemed appropriate to follow the dispositions of the talk; therefore this paper, too, is in many places more sketchy than desirable.  It is to be taken only as a general outline of ideas and of tendencies.”

13. W. Ross Ashby, “Principles of the Self-Organizing System”

  • From W. Ross Ashby, “Principles of the Self-Organizing System,” in Heinz von Foerster and George W. Zopf (Eds.), Principles of Self-Organization (New York: Pergamon Press, 1962), pp. 255-78.  Reprinted by permission of the author and publisher.

Part IV. Information, Communication, and Meaning

14. George A. Miller, “What is Information Measurement?”

  • Reprinted from American Psychologist, 8 (1953), 3-11, with permission of the author and publisher.

15. W. Ross Ashby, “Variety, Constraint, and the Law of Requisite Variety”

  • From W. Ross Ashby, An Introduction to Cybernetics (London: Chapman and Hall, 1956), Chapter 7, pp. 123-134, and Chapter 11, pp. 202-209.  Reprinted with permission of the author and publisher.

16. Anatol Rapoport, “The Promise and Pitfalls of Information Theory”

  • From Anatol Rapoport, “The Promise and Pitfalls of Information Theory,” Behavioral Science, I (1956) 303-309.  Reprinted by permission of the author and publisher.

A. Entropy and Life

17. Erwin Schrödinger, “Order, Disorder and Entropy”

  • From Erwin Schrödinger, What Is Life? (Cambridge: Cambridge University Press, 1945), Chapter VI.  Reprinted by permission of the publisher.

18. L. Brillouin, “Life, Thermodynamics, and Cybernetics”

  • From L. Brillouin, “Life, Thermodynamics, and Cybernetics, ” American Scientist, 37 (October, 1949), 554-68.  Reprinted by permission of the author and publisher.

19. Richard C. Raymond, “Communication, Entropy and Life”

  • From Richard C. Raymond, “Communication, Entropy and Life,” American Scientist, 38 (April, 1950), 273-78.  Reprinted by permission of the author and publisher.

20. L. Brillouin, “Thermodynamics and Information Theory”

  • L. Brillouin, “Thermodynamics and Information Theory,” American Scientist, 38 (October 1950), 594-99.  Reprinted by permission of the author and publisher.

21. Mortimer Ostow, “The Entropy Concept and Psychic Function”

  • From Mortimer Ostow, “The Entropy Concept and Psychic Function,” American Scientist, 39 (1951), 140-44.  Reprinted by permission of the author and publisher.

22. Heinz von Foerster, “From Stimulus to Symbol: The Economy of Biological Computation”

  • From Heinz von Foerster, “From Stimulus to Symbol: The Economy of Biological Computation,” in Gyorgy Kepes (Ed.) Sign, Image, Symbol (New York: George Braziller, 1966).  Reprinted with permission from the author and publisher.

B. Behavior and Meaning

23. F. C. Frick, “The Application of Information Theory in Behavioral Studies”

  • Condensed from F. C. Frick, “Information Theory,” in Psychology:  A Study of a Science, Vol. 2, pp. 611-15, 629-36, edited by Sigmund Koch.  Copyright 1959 by McGraw-Hill, Inc.  Used by permission of the author and McGraw-Hill Book Co.

24. Charles E. Osgood, “A Behavioristic Analysis of Perception and Language as Cognitive Phenomena”

  • Reprinted by permission of the author and the publishers from Contemporary Approaches to Cognition: A Symposium Held at the University of Colorado (Cambridge, Mass.: Harvard University Press), pp. 75-118.  Copyright, 1957, by the President and Fellows of Harvard College.

25. Donald M. MacKay, “The Informational Analysis of Questions and Commands”

  • From D. M. MacKay, “The Informational Analysis of Questions and Commands,” in Colin Cherry (Ed.), Information Theory: Fourth London Symposium (London: Butterworth’s, 1961).  Reprinted by permission of the author and publisher.

26. Russell L. Ackoff, “Towards a Behavioral Theory of Communications”

  • From Russell L. Ackoff, “Towards a Behavioral Theory of Communications,” Management Science, 4 (1957-58), 218-34.  Reprinted by permission of the author and publisher.

Part V. Cybernetics: Purpose, Self-Regulation and Self-Direction

A. Cybernetics and Purpose

27. Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, “Behavior, Purpose and Teleology”

  • From Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow, “Behavior, Purpose and Teleology,” Philosophy of Science, 10 (1943), 18-24.  Copyright 1943, The Williams and Wilkins Co., Baltimore, Md. 21202, U.S.A.  Reprinted by permission.

28. Richard Taylor, “Comments on a Mechanistic Conception of Purposefulness”

  • From Richard Taylor, “Comments on a Mechanistic Conception of Purposefulness,” Philosophy of Science, 17 (1950), 310-17.  Copyright 1950, The Williams and Wilkins Co., Baltimore, Md. 21202, U.S.A.  Reprinted by permission.

29. Arturo Rosenblueth and Norbert Wiener, “Purposeful and Non-Purposeful Behavior”

  • From Arturo Rosenblueth and Norbert Wiener, “Purposeful and Non-Purposeful Behavior,” Philosophy of Science, 17 (1950), 318-26.  Copyright 1950, The Williams and Wilkins Co., Baltimore, Md. 21202, U.S.A.  Reprinted by permission.

30. Richard Taylor, “Purposeful and Non-Purposeful Behavior: A Rejoinder”

  • From Richard Taylor, “Purposeful and Non-Purposeful Behavior: A Rejoinder,” Philosophy of Science, 17 (1950), 327-32.  Copyright 1950, The Williams and Wilkins Co., Baltimore, Md. 21202, U.S.A.  Reprinted by permission.

31. C. W. Churchman and R. L. Ackoff, “Purposive Behavior and Cybernetics”

  • From C. W. Churchman and R. L. Ackoff, “Purposive Behavior and Cybernetics,” Social Forces, 29, 1 (October, 1950), 32-39.  Reprinted by permission of the authors and The University of North Carolina Press.

32. Omar K. Moore and Donald J. Lewis, “Purpose and Learning Theory”

  • From Omar K. Moore and Donald J. Lewis, “Purpose and Learning Theory,” Psychological Review, 60 (May, 1953), 149-56.  Reprinted with permission of the authors and American Psychological Association.

B. Homeostasis and Evolution

33. Walter B. Cannon, “Self-Regulation of the Body”

  • Reprinted from The Wisdom of the Body by Walter B. Cannon, by permission of W. W. Norton & Company, Inc.  Revised and enlarged edition copyright 1939 by Walter B. Cannon.  Copyright renewed 1960 by Cornelia J. Cannon.

34. J. W. S. Pringle, “On the Parallel between Learning and Evolution”

  • From J. W. S. Pringle, “On the Parallel between Learning and Evolution,” Behaviour, 3 (1951), 174-215.  Reprinted by permission of the author and E. J. Brill Ltd., Publishers, Leiden.

35. G. Sommerhoff, “Purpose, Adaptation and ‘Directive Correlation'”

  • From G. Sommerhoff, Analytical Biology (London: Oxford University Press, 1950), Chapter II.  Reprinted with permission of the Clarendon Press, Oxford.

36. W. Ross Ashby, “Regulation and Control”

  • From W. Ross Ashby, An Introduction to Cybernetics (London: Chapman and Hall, 1956), Chapter 10, pp. 195-201, and Chapter 11, pp. 209-218.  Reprinted with permission of the author and Chapman & Hall.  The reader should recall Chapter 15 above reprinting earlier sections from this work that are important for the present discussion.

37. Magoroh Maruyama, “The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes”

  • From Magoroh Maruyama, “The Second Cybernetics: Deviation-Amplifying Mutual Causal Processes,” American Scientist, 51 (1963), 164-79.  Reprinted by permission of the author and publisher.

Part VI. Self-Regulation and Self-Direction in Psychological Systems

38. Charles W. Slack, “Feedback Theory and the Reflex Arc Concept”

  • From Charles W. Slack, “Feedback Theory and the Reflex Arc Concept,” Psychological Review, 62 (1955), 263-67.  Reprinted by permission of the author and the American Psychological Association.

39. Richard Held and Sanford J. Freedman, “Plasticity in Human Sensorimotor Control”

  • From Richard Held and Sanford J. Freedman, “Plasticity in Human Sensorimotor Control,” Science, 142, (25 October 1963), 455-61.  Copyright 1963 by the American Association for the Advancement of Science.  Reprinted by permission of the author and publisher.

40. Tamotsu Shibutani, “A Cybernetic Approach to Motivation”

  • Published originally in this volume.

41. O. H. Mowrer, “Ego Psychology, Cybernetics, and Learning Theory”

  • From O. H. Mowrer, “Ego Psychology, Cybernetics, and Learning Theory,” in Donald K. Adams et al. (Eds.), Learning Theory and Clinical Research (New York: John Wiley, 1954), pp. 81-90.  Reprinted by permission of the author and publisher.

42. Gordon W. Allport, “The Open System in Personality Theory”

  • From Gordon W. Allport, “The Open System in Personality Theory,” Journal of Abnormal and Social Psychology, 61 (1960), 301-11.  Reprinted by permission of the author and publisher.

43. Joseph N. Notterman and Richard Trumbull, “Note on Self-Regulating Systems and Stress”

  • From Joseph N. Notterman and Richard Trumbull, “Note on Self-Regulating Systems and Stress,” Behavioral Science, 4 (October, 1959), 324-27.  Reprinted by permission of the authors and publisher.

44. Geoffrey Vickers, “The Concept of Stress in Relation to the Disorganization of Human Behaviour”

  • From Geoffrey Vickers, “The Concept of Stress in Relation to the Disorganization of Human Behaviour,” in J. M. Tanner (Ed.), Stress and Psychiatric Disorder (Oxford: Blackwell Scientific Publications, Ltd., 1959), pp. 3-10.  Reprinted by permission of the author and publisher.

45. Donald M. Mackay, “Towards an Information-Flow Model of Human Behaviour”

  • From Donald M. Mackay, “Towards an Information-Flow Model of Human Behaviour,” British Journal of Psychology, 47 (1956), 30-43.  Reprinted by permission of the author and publisher.

46. George A. Miller, Eugene Galanter, and Karl H. Pribram, “Plans and the Structure of Behaviour”

  • From George A. Miller, Eugene Galanter, and Karl H. Pribram, Plans and the Structure of Behaviour (New York: Holt, Rinehart & Winston, 1960), Chapters 2 and 4.  Reprinted by permission of the authors and publisher.

Part VII. Self-Regulation and Self-Direction in Sociocultural Systems

47. Karl W. Deutsch, “Toward a Cybernetic Model of Man and Society”

  • From Karl W. Deutsch, “Some Notes on Research on the Role of Models in the Natural and Social Sciences,”  Synthese, 7 (’48-’49), 506-33.  Reprinted with permission of the author and D. Reidel Publishing Co.

A. Social Control: Internal Variety and Constraints

48. S. F. Nadel, “Social Control and Self-Regulation”

  • From S. F. Nadel, “Social Control and Self-Regulation,” Social Forces, 31 (March, 1953), 265-73.  Reprinted by permission of The University of North Carolina Press.

49. Roger Nett, “Conformity-Deviation and the Social Control Concept”

  • Reprinted from Roger Nett, “Conformity-Deviation and the Social Control Concept,” Ethics, 64 (1953), 38-45, by permission of the author and the University of Chicago Press.  Copyright 1953 by the University of Chicago Press.

50. Roger Owen, “Variety and Constraint in Cultural Adaptation”

  • Revised version of a paper read at the 62nd Annual Meeting of the American Anthropological Association, November 21, 1963, San Francisco, California; originally titled “The Social Demography of Northern Baja California: Non-linguistically Based Patri-local Bands.”  With permission of the author.

51. Leslie T. Wilkins, “A Behavioural Theory of Drug Taking”

  • From Leslie T. Wilkins, “A Behavioural Theory of Drug Taking,” Howard Journal, Vol. XI, No. 4 (1965), pp. 6-17.  Reprinted by permission of the author and publisher.

B. Social Control: Organizational Goal Seeking

52. David Easton, “A Systems Analysis of Political Life”

  • From David Easton, A Systems Analysis of Political Life (New York: John Wiley, 1965), Chapter 2, pp. 17-35.  Reprinted by permission of the author and publisher.  Copyright 1965 by John Wiley & Sons, Inc.

53. Mervyn L. Cadwallader, “The Cybernetic Analysis of Change in Complex Social Organizations”

  • Reprinted from Mervyn L. Cadwallader, “The Cybernetic Analysis of Change in Complex Social Organizations,” American Journal of Sociology, 65 (1959), 154-57, by permission of The University of Chicago Press.  Copyright 1959 by The University of Chicago Press.

54. Kurt Lewin, “Feedback Problems of Social Diagnosis and Action”

  • From Kurt Lewin, “Frontiers in Group Dynamics,” Part II-B, Human Relations, I (1947), pp. 147-53.  Reprinted by permission of Tavistock Publications Ltd.

55. Chadwick J. Haberstroh, “Control as an Organizational Process”

  • Chadwick J. Haberstroh, “Control as an Organizational Process,” Management Science, 6 (January, 1960), 165-71.  Reprinted by permission of the author and publisher.

56. Garrett Hardin, “The Cybernetics of Competition: A Biologist’s View of Society”

  • Reprinted from Garrett Hardin, “The Cybernetics of Competition: A Biologist’s View of Society,” Perspectives in Biology and Medicine, VII (Autumn, 1963), 61-84, by permission of the University of Chicago Press.  Copyright 1963 by the University of Chicago Press.

57. Geoffrey Vickers, “Is Adaptability Enough?”

  • From Geoffrey Vickers, “Is Adaptability Enough?” Behavioral Science, 4 (1959), 219-34.  Reprinted by permission of the author and publisher.

C. Decision Processes and Group Structure

58. Anatol Rapoport, “Critiques of Game Theory”

  • From Anatol Rapoport, “Critiques of Game Theory,” Behavioral Science, Vol. 4 (1959), 49-66.  Reprinted by permission of the author and publisher.

59. Walter Buckley, “Society as a Complex Adaptive System”

  • Many of the ideas expressed here appear in more extended form in the author’s Sociology and Modern Systems Theory (Englewood Cliffs, N.J.:  Prentice-Hall, 1967).

Selected References

Index

Book cover:  Systems Research for Behavioral Science: A Sourcebook, Walter Buckley, editor

#behavioral-science, #general-systems-theory, #organization-science, #systems-thinking

Wholism, reductionism (Francois, 2004)

Proponents of #SystemsThinking often espouse holism to counter over-emphasis on reductionism. Reading some definitions from an encyclopedia positions one in the context of the other (François 2004).

–begin paste —

1560
HOLISM 1) – 3)
“A descriptive and investigative strategy which seeks to find the smallest number of explanatory principles by paying careful attention to the emergent properties of the whole, as opposed to the behavior of the isolated parts, as chosen by the observer in a reductionist strategy” (T.F.H. ALLEN & T.B. STARR, 1982, p.270).

The term and the concept were introduced in 1926 by the South African general and statesman Jan SMUTS. The term was derived from the Greek: “holos” = whole.

SMUTS wrote: “The idea of wholes and wholeness should… not be confined to the biological domain: it covers both inorganic substances and the highest manifestations of the human spirit. Taking a plant or an animal as a type of a whole, we notice the fundamental holistic character as a unity of parts which is so close and intense as to be more than the sum of its parts; which not only gives a particular conformation or structure to the parts, but so relates and determines them in their synthesis that their functions are altered; the synthesis affects and determines the parts, so that they function towards the whole; and the whole and the parts therefore reciprocally influence and determine each other, and appear to merge more or less their individual characters: the whole is in the parts and the parts are in the whole, and this synthesis of whole and parts is reflected in the holistic character of the functions of the parts as well as of the whole” (1926-1973, p.86).

M. BUNGE describes as follows the characteristic theses of holism, “the ontological view that stresses the integrity of systems at the expense of their components and the mutual actions among them”:

“1. The whole precedes its parts.
“2. The whole acts on its parts.
“3. The whole is more than the sum of its parts.
“4. Wholes emerge under the action of agents that transcend both the actions among the components and the environmental influences.
“5. Totalities cannot be explained by analysis: they are irrational.
“6. The whole is better than any of its parts” (1979, p.39-40).

BUNGE sharply criticizes these theses, reproduced however by him in a somewhat caricatural form (see hereafter).

According to ALLEN and STARR, both holism and reductionism seek to explain emergent behavior by invoking a lower level of organization (p.270).

Thus, both strategies admit the existence of hierarchies in systems.

J.A. GOGUEN and F.J. VARELA however observe: “Most discussions place holism/reductionism in polar opposition. This seems to stem from the historical split between empirical sciences viewed as mainly reductionist or analytic, and the (European) schools of philosophy and social sciences that grope toward a dynamics of totalities” (1979, p.40).

However: “It seems that both these directions of analysis always coexist, either implicitly or explicitly, because these descriptive levels are mutually interdependent for the observer. We cannot conceive of components if there is no system from which they are abstracted, and there cannot be a whole unless there are constitutive elements” (p.41).

These authors give the excellent example of harmony and melody, which are at “… a level of organization above that of the notes themselves” (Ibid).

Finally: “Reductionism implies attention to a lower level, while holism implies attention to a higher level. These are intertwined in any satisfactory description; and each entails some loss relative to our cognitive preferences, as well as some gain” (p.42).

M. BUNGE holds a dim view on holism, which he carefully distinguishes from systemics, as holism “recognizes the existence of systems with specific characters (emergent properties), but treats them as totalities or black boxes“. According to him holism “… refuses to analyse them and to explain the formation and the collapse of wholes in function of their components and the interactions between them” (1995, p.16). He also indicts holism as “responsible for the backwardness of the non-physical sciences. It has contributed precious little to serious systemics, precisely because: (a) it has not engaged in the study of the links that hold any system together, and (b) rather than constructing conceptual systems (theories) to account for concrete systems, it has spent itself in attacking analytical or atomistic approach and praising totality as such. Whatever truth there is in holism – namely that there are totalities, that they have properties of their own, and they should be treated as wholes – is contained in systemism, or the philosophy underpinning systemics” (p.410).

The most equilibrated view has been offered by G. KLIR who considers that present systems thinking: “… represents a synthesis of the reductionistic thesis and the holistic antithesis” (1993, p.36).

The French philosopher B. PASCAL anticipated (1670!) this view: “I consider impossible to obtain knowledge of the parts without knowing the whole, nor to know the whole without particular knowledge of the parts” (Quoted by R. VALLEE, 1995, p.11).

The concept of Gestalt, as refering to the perception of wholes, also is another conspicuous root of holism.

Some view holism itself as a kind of reductionism. K. BAUSCH for ex. defines it as “A reductionist descriptive and investigative strategy for generating explanatory principles of whole systems” (Glossary, Pers. comm., 2002).

Indeed: “Attention is focused on the emergent properties of the whole rather than on the behavior of the isolated parts” (Ibid).

Of course, holistic models should be paired with classical reductionist ones as both aspects are complementary and necessary for comprehensive explanations.

As a very simple illustration, while H2O has specific properties as a whole, its constitution can be understood only by knowing the chemical and physical characteristics of H and O that allow them to combine.

— end paste —

With holism on one side, reductionism is on the other.

— begin paste —

2771
REDUCTIONISM 1) – 3)
1. “A descriptive and investigative strategy which gives account of phenomena in terms of a series of isolated parts, coupled together by direct causal linkages” (T.F.H. ALLEN & T.B. STARR, 1982, p.276).

2a. A principle “according (to which) all scientific concepts are reducible to a set of ultimately irreducible concepts” (R.L. ACKOFF, 1974, p.53).

2b. “The belief that everything in the world and every experience of it can be reduced, decomposed, or disassembled to ultimately simple elements, indivisible parts” (R.L. ACKOFF, 1991, p.325).

3. The claim “that properties of a whole are explicable in terms of properties of the constituent elements” (G. KLIR, 1991, p.24).

4. “… the task to find the simplest, most economical and (usually) most elegant explanation that will cover the known data” (G. BATESON, 1979, p.230).

T.F.H. ALLEN and T.B. STARR add: “Ambiguity in relationship between parts is met with further subdivision until the ambiguity disappears” (Ibid).

As a result “Reductionism is thus the strongest possible way of ordering the list of the various sciences” (I.I. MITROFF and H.A. LINSTONE, 1993, p. 165).

Commenting his definition, ACKOFF states that these “ultimate concepts”, according to some “were provided by direct observation” and to others “thought of as undefined concepts of a formal system”.  And, “Whatever their source, these concepts were identified as “physical thing predicates”; that is, physical properties of things”.

BATESON adds to his definition the following caveat: “Beyond this, reductionism becomes a vice if it is accompanied by an overly strong insistence that the simplest explanation is the only explanation. The data may have to be understood within some larger gestalt” (Ibid).

For a good example, see: Selection (Multi-level)

Reductionism is a paradigm. R. ROSEN expresses this as follows: “The belief that any natural system can be so decomposed (note: i.e. extensively, and possibly limitlessly), and that the laws governing the motions of these particles can be determined, is the essence of reductionism” (1979, p.174). The “belief” is something like the hardening of an assumption.

The conceptual origins of reductionism are to be found in DESCARTES and NEWTON, if we are not going back in time to DEMOCRITUS, EPICURUS and LUCRETIUS.

The basic (and largely unconscious) tenets of reductionism seem to be the following:

  • what should be researched are the properties of objects, which are “real” and perfectly knowable
  • the observer or experimenter is “transparent”, i.e., without influence on the perceived objects
  • the “et ceteris paribus” postulate can be applied without restrictions, which allows for linear causal explanations
  • supposedly, no significant aspects of phenomena are thus left out, (forgetting for example interrelations between elements or parts, or between levels)
  • larger contexts do not influence upon phenomena

Systems concepts are however not opposed to the reductionist approach, as they admit levels of description. This point was made as follows by P. WEISS (quoted by G. KLIR, 1991, p.26), who describes a neutral stand: “The reductionist likes to move from the top down, gaining precision of information about fragments as he descends, but losing information content about the larger orders he leaves behind: The other proceeds in the opposite direction, from below, trying to retrieve the lost information content by reconstruction, but recognizes that that information is not forthcoming unless he already had it on record in the first place. The difference between the two processes, (is) determined partly by personal predilection, but largely also by historical tradition”.

This does not make clear that the main concern of the systemist is the interactions within the same level as well as between different levels of complexity, in the whole.

Moreover, as stated by I. PRIGOGINE (quoted by F. David PEAT, 1987, p.64), “… there is no “fundamental level” in nature but rather each level involves its unique description and is conditioned by the levels around it”.

In some sense, reductionism is thus “inscribed” within the systemic approach as: “Nature… requires pluralistic descriptions and … this pluralism must contain both causal and synchronistic aspects” (Ibid).

In an overview of this subject, J.A. GOGUEN and F.J. VARELA conclude: “Reductionism implies attention to a lower level, while holism implies attention to a higher level. They are intertwined in any satisfactory description; and each entails some loss relative to our cognitive preferences, as well as some gain” (1979, p.42).

Reductionism may also be understood as the use of OCKHAM’s razor, as stated by P. CHECKLAND, who writes: “And we use reductionism in another sense in explanation, explaining the results using the smallest possible number of concepts, only importing more elaborate concepts when defeated in this” (1976, p.128).

Still another meaning frequently given to reductionism is the stand according to which social and biological sciences can ultimately be reduced to explanations based on physical sciences. While anything social or biological is by necessity based on physical properties, the reductionist stand ignores the existence of successive levels of complexity grounded on the emergence of dissipative structures and synergies.

On this topic D. CAMPBELL accepts:”… the limited emergentist principle that laws of biology, psychology and sociology exist which are not described by the laws of physics and inorganic chemistry. These emergent laws are compatible with the laws of physics and chemistry but not derivable from them” (1975, p.1104).

At long last, the fundamental difficulty becomes obvious: so-called laws must be non-contradictory between them to be acceptable on all levels. The best example has been the elimination of the vitalist stand in biology when the apparent impossibility of life in terms of classical thermodynamics was taken care of successively by such models as open system, homeostasis and finally dissipative structuration. It should be noted that none of these models are “laws”, nor their applications limited to any particular scientific discipline.

P.M. ALLEN et al. observe that “the very success of reductionist science has… provided man with the power to radically change his environment. However, this science has afforded almost no knowledge of the probable effects of such actions in the complex systems encountered in the domains of biology, ecology and socio-economics. Policy today must be formulated in a world of ever increasing interaction and complexity” (1984, p.2).

— end paste —
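To make the complementarity concrete in a form programmers may recognize, here is a minimal toy sketch (my own illustration, not from François or the authors he cites) contrasting the two strategies on the water example above. The class names, values and the "bonding rule" are invented assumptions; the point is only that a purely additive account of isolated parts misses a property that appears once interactions are described.

```python
# Toy sketch (illustrative only): reductionist vs. holistic descriptions of a whole.
# Names, values and the bonding rule below are invented assumptions, not chemistry.

from dataclasses import dataclass

@dataclass
class Part:
    name: str
    mass: float        # a property that aggregates additively across parts
    flammable: bool    # a property of the isolated part

def reductionist_view(parts):
    """Describe the whole purely as an aggregate of isolated-part properties."""
    return {
        "total_mass": sum(p.mass for p in parts),
        "any_part_flammable": any(p.flammable for p in parts),
    }

def holistic_view(parts, interaction_rule):
    """Keep the lower-level account, then add whole-level (emergent) properties."""
    description = reductionist_view(parts)        # the parts still matter
    description.update(interaction_rule(parts))   # plus what only the whole exhibits
    return description

def bonding_rule(parts):
    # An emergent property attributed to the combined whole, not to any isolated part.
    if {p.name for p in parts} == {"H", "O"}:
        return {"extinguishes_fire": True}        # water as a whole, unlike H or O alone
    return {}

hydrogen = Part("H", mass=1.0, flammable=True)
oxygen = Part("O", mass=16.0, flammable=False)    # an oxidizer; simplified here

print(reductionist_view([hydrogen, hydrogen, oxygen]))
print(holistic_view([hydrogen, hydrogen, oxygen], bonding_rule))
```

Both calls are needed, echoing Klir's synthesis quoted above: the reductionist account supplies the parts and their properties, while the interaction rule supplies what no isolated part can.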


The numbers beside the encyclopedia entry mean …

  • The following special markers have been used, in order to enhance the usefulness of the encyclopedia:
  • 1) meaning “systemic on a wide range”, or “general information”
  • 2) meaning “general abstract or mathematical model”, or “methodology”
  • 3) meaning “epistemological or ontological aspects”, or “semantics”
  • 4) meaning “practical in human sciences”
  • 5) meaning “more specific or disciplinarian”

In this paper-first encyclopedia, the bolded text links to other entries.

Reference

François, Charles, ed. 2004. International Encyclopedia of Systems and Cybernetics | 2nd ed. De Gruyter Saur. https://doi.org/10.1515/9783110968019.

International Encyclopedia of Systems and Cybernetics

#holism, #reductionism

It matters (word use)

Saying “it doesn’t matter” or “it matters” is a common expression in everyday English. For scholarly work, I want to “keep using that word“, while ensuring it means what I want it to mean.

The Oxford English Dictionary (third edition, March 2001) has three entries for “matter”. The first two entries are for a noun. The third entry is for “matter” as a verb. The first two definitions aren’t what I’m looking for.

I. Senses relating to physical matter or substance.

II. Senses relating to material content.

The third entry, for the verb, is the definition of interest; it dates back to the 1500s.

III. Senses relating to significance or import. 

3.intransitive. To be of importance; to signify. Usually in interrogative and negative contexts.

 a. With non-referential it as subject or impersonal with adverbial what, and with complementary subordinate clause. Also with clause as subject. (In quot. 1817   used transitively with indirect object.)

[…]
1591   H. Savile tr. Tacitus Ende of Nero: Fower Bks. Hist. iv. 202   Sosianus and Sagitta were men vile and of no account, neither mattered it where they liued. [….]

1630   Bp. J. Hall Occas. Medit. §xiii   It matters not, O God, how I am vexed here below a while.

1651   T. Hobbes Philos. Rudim. vii. 122   Nor matters it, that he hath perhaps made any promise to assemble his Subjects on some certain times. [….]

1878   Ld. Tennyson Revenge xi   We die—does it matter when? [….]

1908   L. M. Montgomery Anne of Green Gables xii. 119   If she doesn’t like you it won’t matter how much Diana does.

1920   D. H. Lawrence Women in Love xxx. 511   Does it matter, whether I drink white wine this evening, or whether I drink nothing? [….]

 b. Without complementary clause.

1567   T. Drant tr. Horace Arte of Poetrie sig. Bij   I might or this haue written noble geare But that from collor, I am purgd at springe tyme euery yeare. It matters not. [….]

1846   C. Dickens Dombey & Son (1848) iii. 23   ‘Miss Florence was afraid of interrupting, Sir,..’ said Richards. ‘It doesn’t matter,’ returned Mr. Dombey. […]

 c. With a thing (material or immaterial) as subject.

c1683   Wodrow MS in C. K. Sharpe Hist. Acct. Belief Witchcraft Scotl. 157   To this answered, ‘That matters not tho’ it were the night before the morn, if they go to heaven.’ [….]

1935   G. Greene in Spectator 9 Aug. 222/2   What matters is the witty dialogue, the quick intelligent acting of Mr. Tone and Miss Merkel. [….]

 d. Of a person: to be important, have influence.

1848   W. M. Thackeray Vanity Fair lxiv. 589   ‘Oh, it was Madame de Belladonna, was it?’ Becky said… ‘No—she does not matter—she is always jealous.’

1909   H. W. C. Newte Sparrows xl. 505   With your appearance and talents you should be a great social success with people who matter. [….]

#etymology

Systemic Change, Systematic Change, Systems Change (Reynolds, 2011)

It’s been challenging to find sources that specifically define the two-word phrases — i.e. “systemic change”, “systematic change”, “systems change” — as opposed to loosely recombining one-word definitions. #MartinReynolds @OpenUniversity clarifies uses of the phrases, with a critical eye on the motives for choosing a specific label, as well as the associated risks and traps.

Working from the end of the paper towards the beginning, the conclusion points “towards a critical systems literacy”.

— begin paste —

4.2 Towards a critical systems literacy

[….] Using our own form of systems literacy, systems boundaries (the domain of systems change) are subject to systematic changes invoked by the designers and users of systems, and systemic changes invoked by those subject to the use of systems. There is here a triadic interplay between three perpetual factors –

  • systems with their boundaries,
  • people and their values, and
  • real world entities and events in the factual domain.

The relationship between them can be expressed in terms of either an entrapped vicious circle or a liberating virtuous cycle.

The three types of trap noted above represent responses to particular types of well-founded anxiety and fear with managing complex issues. There is

  • the continual fear of systemic uncertainty in unforeseen events and unintended consequences,
  • the fear of losing or even reinforcing excessive systematic control, and
  • the fear of change in systems; an undue ultimate optimism in old or new systems.

Table 4 summarises these traps in terms of contributing towards a critical literacy of systems thinking in practice.

  • Systemic.  Location of change: complex realities or situation.  Primary intent: make simple & manageable the complex web of realities for improving situations.  Risks or traps: seeing a mess as simple problem-solving, i.e. reductionist thinking, rather than as improvement resolution.  Some key vocabulary: complexity, feedback, emergence, uncertainty, autonomy.
  • Systematic.  Location of change: stakeholders.  Primary intent: developing mutual understanding and shared practice.  Risks or traps: fixing people as objects for purposive endeavours rather than as purposeful subjects.  Some key vocabulary: perspectives, praxis, learning, stakeholding.
  • Systems.  Location of change: conceptual worlds.  Primary intent: improvement of situations and emancipation through reflective practice.  Risks or traps: complacency and obsession with ‘systems’, e.g. as holistic devices, rather than as temporary pragmatic constructs.  Some key vocabulary: judgements, boundaries, reframing, critique.

Table 4. Features of a critical systems literacy

A key intent of systems thinking associated with systems change is to continually question boundaries of our conceptual constructs with a primary focus on improving the situation. That is, with a focus on steering good systemic change.

— end paste (with editorial paragraphing added) —

Comment: Backing up through the paper gives us some stronger definitions and understanding, gained through applying systems thinking in field research. A challenge with espoused systems thinkers, though, is recognizing those with a critical eye, as distinct from those with a hammer looking for a nail.

— begin paste —

4. Implications of a ‘Critical’ Systems Thinking in Practice

[….] If systems thinking in practice provides such a potentially powerful agent of change, what is it that may inhibit such change? In the practical domain of engaging with different perspectives, the fear for change is manifest in the traps of uncritical thinking that pervade our everyday practices. Aligned with these traps is an unclear use of language around systems thinking. What precisely is meant by the terms systemic, systematic and system and how might such terms be more meaningfully incorporated into a critical systems literacy?

— end paste —

Comment: Firstly, let’s understand what’s usually behind systemic change.

— begin paste —

Trap 1: Silo Problem-Solving: Towards Anticipating Systemic Change

[….] The conventional functionalist systems idea of organisation – a whole consisting of related parts contributing to a particular function – has contributed considerably to a reification of this type of silo thinking. Organisations are typically organised with departmental terms of reference carrying clearly defined remits for employees. The idea is neat, easy to work with in terms of providing some assurance of certainty, or at least lack of ambiguity, and most importantly, as suggested above, comfortable. Comfort is conventionally drawn from some basic (mis)understanding about organisations working as self-contained functional systems, the output of which is unquestionably some ‘good’ for the wider community. It pervades many impressions of organisations whether small and simple or large and complex. [….]

A systemic issue comprises complexity, uncertainty, interdependencies and controversy involving a wide range of variables requiring resolution. A technical problem on the other hand bounded by a fixed bounded silo occupies the more comfortable domain, amenable to a solution, usually provided by a traditional ‘expert’. Characteristics of issues are troublesome! They can sometimes distract from getting things done. But can they be ignored?

The trap of silo thinking is based upon the idea that such issues can be ignored. It is associated with reductionism. A critical perspective on systems acknowledges that, to use a famous systems adage, a system is merely a map of a situation or territory, not to be confused with the actual territory. Real world complexities represent something that exists outside of any one conceptualisation of context. The real world complexity provides the site for systemic change. In terms of a systems literacy, the tension between system and situation might be appreciated in terms of a conversation. The distinction between thinking about systems and systems thinking is helpful in clearing ground between systems thinking and related disciplines associated with systems sciences (e.g., complexity and chaos theory). It respects rather than struggles against two different perceptions of ‘systems’: one, as with systems thinking, an epistemological construct; the other, as with systems sciences, more an ontological entity.

A key underplayed intent of systems thinking associated with systemic change is to make simple the complex web of interrelationships and interdependencies in a transparent (and thereby questionable) manner. In short, systems thinking about systemic change involves a continual conversation between ‘systems’ and ‘situations’; a tension expressed through the act of making simple the complex – a tension that invites more an artistic rather than scientific literacy. This is not to deny the importance of a scientific literacy promoting more detailed understanding in terms of, say, evolutionary science, chaos theory and complexity sciences, but the craft of systems thinking is primarily geared towards making manageable the complex. The task involves using a language that is accessible to all stakeholders.

— end paste —

Comment: The above view on systemic change uses human organization as the system of interest. Is it possible that systemic change might apply to some other types of system? Well, given that it’s human beings who frame the systems of interest, if the system isn’t anthropocentric, we could argue that systems don’t exist in reality. Framing systems is just our way of trying to make sense of ourselves, and of nature. (Having nature make sense of nature seems strange, philosophically).

Comment: Secondly, the article turns to systematic change.

— begin paste —

Trap 2: Fixing People: Towards Purposeful Systematic Change

The trap of ‘fixing people’ into pre-designed purposes – ‘purposive management’ – is based upon the misguided behaviourist idea that different purposes from different perspectives can be moulded into a consensual purpose. The story of failure in organizational change projects, […], in contrast, suggests alternative strategies based upon working with people/ stakeholders rather than working on them. The trap here is related to the trap of dogmatism. Systemic failure in many situations can often be associated with the dogmatic disregard of other perspectives that inform the situation.

The literacy called for requires not just simplifying realities for individual comprehension but making sense of realities for mutual understanding amongst stakeholders involved in a situation in order to foster shared practice. This second aspect of a systems literacy speaks to the human dimension of intervention. As such it speaks of systematic change; change directed by human agents. The term ‘systematic’ relates to an inevitable requirement of orderliness. Our means of communication through language and discourse requires levels of systematisation to a greater or lesser extent so as to generate some sense of mutual understanding. [….]

Social learning, like Theory Y, invokes a proactive engagement amongst stakeholders in systematically managing change. The idea moves away from implementation modelled on hierarchical notions of working on people – restructuring, reconfiguring, re-engineering – and then dealing with inevitable subsequent resistance amongst stakeholders, towards a more collective notion of working with people – stakeholding development. [….]

Conventional systematic change is purposive. This involves a linear application of tools to serve a prescribed purpose. In contrast, purposeful systematic change involves use of language, amongst other tools, for iterating on better revised goals based on improved understanding and better practice.

— end paste —

Comment: Systematic change is related to working hierarchically on people/stakeholders rather than socially working with people/stakeholders who might (or might not) otherwise learn by themselves. Orderliness suggests more of a mechanistic view of organizations of human beings, when self-organization is not left to chance.

Comment: The distinction between purposive and purposeful dates back to Russell Ackoff, all the way to his dissertation research. Here’s a brief summary. A group is purposive if it shares a goal over a planning period (e.g. until a project is done). A group is purposeful if it shares an ideal beyond any planning period (e.g. an aesthetic, moral or ethical pursuit).

Comment: In the third of three parts, systems change is seen as moving to a “new” system, as an alternative to maintaining the old system. This leads to questions about holism (i.e. what is the whole?) as well as pluralism (i.e. who, or how many, get to decide what is real and what isn’t), which in turn lead to boundary critique (i.e. what and who are inside/outside a system).

— begin paste —

Trap 3: Maintaining Systems or ‘Systems’ Obsession: Toward Meaningful Systems Change

[….] Continually adopting ‘new’ systems runs the risk of elevating the notion of ‘system’ to a fetish status; celebrating the very notion of system as being the panacea for crises. Systems are often referred to in association with new developments – miraculous ways of doing things.

The trap of systems maintenance, or being obsessive with the tools we construct, lies in reifying and privileging the ‘system’ – whether it’s old or new – as though it has some existence and worth outside of the user and some status beyond its context of use in enabling change. […]

There are many … ‘systems’ that … entrap our understanding and practice. A generic term for these is ‘business as usual’ (BAU). Examples include the annual cycles of organisational planning, target setting, budgeting, the development of performance indicators and performance related pay incentives etc. BAU models maintain existing ‘systems’ principally because of a fear for change. But the fear is not evenly distributed amongst all stakeholders. Some fear change more than others simply because the system works in a partial manner. The system works for some and not for others.

All systems are partial. They are necessarily partial – or selective – in the dual sense of (i) representing only a section rather than the whole of the total universe of considerations, and (ii) serving some parties – or interests – better than others (Ulrich 2002 p. 41). In other words, no proposal, no decision, no action, no methodology, no approach, no system can get a total grip on the situation (as a framework for understanding) nor get it right for everyone (as a framework for practice) (Reynolds, 2008a).

[….] the two dimensions of partiality respond to the two transitions implicit in systems thinking about systems change; one, towards holism, and another towards pluralism. Given the partiality of systems a third critical dimension is required where systems boundaries inevitably need to be made and questioned on the inevitable limitations of being holistic and pluralistic.

— end paste —

Comment: Referring to Table 4, the primary intent of systems change can include emancipation, surfacing voices that aren’t heard in the way the current system operates. The risk or trap with systems change could then be potentially “throwing out the baby with the bathwater”, by introducing a “new” system that replaces an “old” system that may have been dysfunctional (to a greater or lesser degree), but not broken.

Comment: Now having covered most of the article backwards, readers who are unfamiliar with the Critical Systems Thinking literature may want to start from the beginning of the paper for an orientation and summary.

Figure 2. Critical systems framework illustrating systems thinking in practice activities.

The Open University group for Applied Systems Thinking in Practice is one of the most venerable in the systems movement. The resources available online are foundational in their teaching.

References

Reynolds, Martin. 2011. “Critical Thinking and Systems Thinking: Towards a Critical Literacy for Systems Thinking in Practice.” In Critical Thinking, edited by Christopher P. Horvath and James M. Forte, 37–68. New York, USA: Nova Science Publishers. https://www.novapublishers.com/catalog/product_info.php?products_id=20176. Released on Open Research Online at http://oro.open.ac.uk/30464/

#systematic-change, #systemic-change, #systems-change, #systems-thinking

Environmental c.f. ecological (Francois, 2004; Allen, Giampietro Little 2003)

The term “environmental” can be mixed up with “ecological”, when the meanings are different. We can look at the encyclopedia definitions (François 2004), and then compare the two in terms of applied science (i.e. engineering) with #TimothyFHAllen, @MarioGiampietro and #AmandaMLittle (2003).

Delimiting a system at least partially defines an environment.

–begin paste —

1120
ENVIRONMENT 1)
1. “The context in which a system exists” (B. BANATHY, 1973, p.86).

B. BANATHY states: “It is composed of all the things that surround the system, and it includes everything that may affect the system and that may be affected by the system” (Ibid).

Of course, the whole universe is the environment of any system. Practically we must thus define in a more restricted way the environment of the system as those parts of the general environment that interact with it more or less strongly and/or permanently. However, there is a snag (or two):

a – Two different observers may very well have different descriptions of the system’s environment (pursuing either the same- or a different goal)

b – There is never any absolute certainty that the observer did not forget, or be unable to register some important part of the environment.

An incorrect or incomplete evaluation of the environment is the main cause of most of the technological and social catastrophes engineered nowadays by ill-advised planners.

2. “Those variables whose changes affect the organism (system) and those variables which are changed by the organism’s (systems) behavior” (W.R. ASHBY, 1960, p.36).

A more or less equivalent definition by S. KATZ is: “… everything the nervous system may use as a source of knowledge” (1976, p.45).

ASHBY states: “It is thus defined in a purely functional, not a material sense” (Ibid). There are reciprocal feedbacks between the environment and the system. ASHBY cites STARLING, who wrote: “Organism and environment form a whole and must be viewed as such” (1960, p. 38). Unfortunately, this necessity is very frequently ignored, even by high level systems designers of the most various kinds: physicians, engineers, economists, agronomists, etc.

In the words of M. DODDS and G. JAROS “… the environment is not a neutral laboratory, but a stakeholder with its own needs” (pers. comm.).

This very serious problem is related to J. FOURASTIE’s “Ignorance of ignorance” or to G.de ZEEUW’s “invisibility“: a part of the significant environment is not perceived.

3.”For a given system, the environment is the set of all objects a change in whose attributes affect the system and also those objects whose attributes are changed by the behavior of the system” (A.D. HALL & F.E. FAGEN, 1956, p.20).

While this very old (1956) definition looks quite rigid, HALL and FAGEN have it perfectly clear that “The statement above invites the natural question of when an object belongs to a system and when it belongs to the environment; for if an object reacts with a system in the way described above should it not be considered a part of the system? The answer is by no means definite. In a sense, a system together with its environment makes up the universe of all things of interest in a given context. Subdivision of this universe into two sets, system and environment, can be done in many ways which are in fact quite arbitrary” (Ibid).

These comments are allowable, but their importance should not be over-emphasized: In most practical cases, it is easy to distinguish the system within its environment, mostly so when it is a strongly integrated system. This being the case, the only doubt is about some border elements.

Even composite systems (snowfields, locust swarms, human masses) can be more or less easily distinguished from their environment.

François (2004) pp. 201-203

— end paste —
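Read operationally, the Hall and Fagen definition just quoted can be sketched in a few lines of Python. This is only my illustrative reading of the definition: the object names and the “affects” relation are made up, and, as the entry itself notes, the initial choice of what counts as the system remains the observer’s.

```python
# Minimal sketch of Hall & Fagen's functional notion of "environment": the objects
# outside a chosen system whose attribute changes affect the system, or whose
# attributes are changed by the system's behaviour.  The universe of objects and
# the 'affects' relation below are illustrative assumptions, not from the source.

def environment(system, affects):
    """system: set of objects chosen as 'the system'; affects: set of (a, b) pairs
    meaning 'a change in a affects b'; returns the environment of that system."""
    universe = {a for a, b in affects} | {b for a, b in affects} | system
    return {
        obj for obj in universe - system
        if any((obj, s) in affects for s in system)   # obj affects the system
        or any((s, obj) in affects for s in system)   # obj is affected by the system
    }

affects = {
    ("stream", "beaver_colony"), ("beaver_colony", "stream"),
    ("aspen_stand", "beaver_colony"), ("beaver_colony", "aspen_stand"),
    ("distant_mountain", "weather"),   # interacts, but not with the chosen system
}

print(environment({"beaver_colony"}, affects))
# e.g. {'aspen_stand', 'stream'} -- 'distant_mountain' and 'weather' drop out
```

Redrawing the boundary (say, moving the stream inside the system) changes the computed environment, which is exactly the arbitrariness of subdivision that Hall and Fagen flag.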

For ecological, let’s refer to the encyclopedia definition for ecology, that for better or worse, refers back to environment.

— begin paste —

1004
ECOLOGY 1)
The science of interactions of living systems among themselves and with their environment.

Already in 1866, E. HAECKEL defined ecology as the global science of the organisms relations with their surrounding world, in which he included generally all the conditions of their existence.

Later on, scientists like A. LOTKA (1924, 1956) and V. VOLTERRA (1931) studied more restricted phenomena as, for example the interrelations of two or three species. Others considered mainly lesser biotopes: a meadow, a small island, a wood.

However, more recently, HAECKEL’s programme has been retrieved as it becomes evermore obvious that these restricted inquiries should be put into the perspective of a much more global understanding.

The French ecologist F. RAMADE writes: “Ecology studies complex systems; its approach is thus, ipso facto, of a holistic nature. Its object is at the summit of the organizational ladder of living systems: The simplest biological entity of its concern is the population. Furthermore and by order of growing complexity, its objects of study are colonies, living communities, ecosystems and biosphere as a whole” (1993, p.424).

Ecology is thus a typically systemic science.

François (2004), p. 182

— end paste —

With engineering as an applied science, comparing the ecological to the environmental draws in questions about dealing with living in addition to non-living systems. Coauthors #TimothyFHAllen @MarioGiampietro and #AmandaMLittle reduce anthropocentricity by considering the beaver as a builder.

Abstract
This paper … it focuses on the distinction between the purpose-driven design of structures in environmental engineering and the natural process of self-organization characteristic of life, which needs to be integrated into ecological engineering.

Conventional engineering addresses the problem of fabrication of an organized structure, say a road, which reflects a goal at the outset, as well as considerations external to the road. At the outset there is an essence of which the organized structure is a realization. […] Engineers deal with the challenge of the realization of a plan at a given point in space and time.

The central dogma of biology identifies organisms as informationally-closed and this makes possible their use as machines. Ecological systems, on the contrary, are informationally-open. They cannot be used as machines to create functional structures, because they are becoming in time.

The distinction between informationally-closed and informationally-open is a clue that references to the relational biology work of Robert Rosen lie ahead.

From the perspective of “life itself“, environmental engineering is seen as a branch of conventional engineering, whereas ecological engineering comes from a different place.

While organisms may be used as machines, we deny that ecological systems can be used as machines to create ecologically-engineered functional structures. Unlike organisms, ecological systems are informationally-open, and cannot be used reliably in the medium term, let alone for extended long-term periods. Rather than work as agents of creation, ecological processes act as constraints and perturbations on both the environment of the realization process and the functional engineered structure. [….]

Ecological engineering deals with structures whose realization process and structured functionality is disrupted by failure of the associative context such that the original goals must be revisited. This is due to the systemic mismatch in time scale between: (a) the pace at which human decision making and engineering operates and (b) the pace at which ecological processes update their typologies and mechanisms of control.

How much of the change outside of the system is considered?

Environmental engineering is an extension of the engineering process that considers the environment in as many aspects as are thought to be relevant. Engineering requires models to be constructed of the action to be undertaken, considering safety factors only at the anthropomorphic level, not at the level of the safety of the ecosystem. [….]

Environmental engineering is a branch of civil and sometimes industrial engineering. As such it remains within the purview of standard engineering protocol as it imposes an external design on material that is the passive recipient of engineered limits.

Not so for ecological engineers, whose engineered material offers no such constancy. This one fact puts ecological engineering in a different class from all other engineering, including environmental engineering.

Plants, animals and bacteria are not as predictable as steel and concrete, because life indeed has a life of its own. Environmental engineers constrain that creative force of life, so that it can be used successfully similar to the successful use of steel in civil engineering projects. [….]

In contrast to all other engineering, the ecological engineer co-opts a creative process that is intrinsic to the emergent biological structure. Engineers facing ecological dynamics would do well to pay attention to the inescapable emergence that is embodied in living material, for it demands a distinctive style and invokes a new approach. Ecological engineering amounts to surfing the vortex of some emergent property, and so is often perceived as uncertain.

The time scale with environmental engineers is shorter than that for ecological engineers.

Many of the differences between engineered and ecological material may be seen as simply an issue of relative scale. The goals for conventionally engineered material are set at a scale such that the substance of the engineered product is relatively inert over the time it takes to fulfill the goal. In the end all bridges do come down, but usually by design of demolition after the many decades, or even a century or two of service.

Over the expected life of an ecologically-engineered structure, its context may cause it to change in form and function. Meanwhile the material of the ecologically-engineered structure may also completely change its components—replaced, as the woodsman’s axe that is five handles and two heads old.

If ecologists built bridges, they would commonly fall down.
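
Read as arithmetic, the scale argument comes down to the ratio between how quickly the engineered material itself changes and how long the structure is meant to serve. The sketch below is a toy illustration of that ratio, not anything from Allen, Giampietro and Little; the function name and every number in it are assumptions made purely for the example.

```python
# Toy comparison (illustrative assumptions only, not from Allen et al. 2003):
# how fast does the engineered material change relative to the design life?

def inertness_ratio(material_turnover_years: float, design_life_years: float) -> float:
    """Ratio >> 1: the material is effectively inert over the goal's time scale.
    Ratio << 1: the material changes out from under the design."""
    return material_turnover_years / design_life_years

cases = {
    # conventional engineering: a steel bridge with a ~100-year design life
    "steel bridge": inertness_ratio(material_turnover_years=1_000, design_life_years=100),
    # ecological engineering: a constructed wetland whose species composition
    # turns over within a few years against a multi-decade goal
    "constructed wetland": inertness_ratio(material_turnover_years=5, design_life_years=50),
}

for name, ratio in cases.items():
    verdict = "treat the material as inert" if ratio >= 1 else "the material changes under the design"
    print(f"{name}: turnover / design life = {ratio:.2f} -> {verdict}")
```

On these made-up numbers, the bridge designer can ignore the material’s own dynamics, while the ecological engineer cannot; that is the point of the woodsman’s axe and the falling bridges above.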

The third section explicates the “distinctive character of ecological over environmental engineering”, delving into contexts and ecological processes. The story is a little easier to follow in the paper’s description of beavers as ecological engineers.

During the summer, beavers consume high quality herbaceous foods in an opportunistic fashion (Svendsen, 1980, Jenkins, 1981), but during the winter, they are forced to consume low quality woody materials in a selective fashion in order to survive. The goal of the beavers in this case is to obtain food throughout the year (Fig. 8).

Fig. 8. At this scale, the ecosystem-specified essence of function is to safely obtain energy from plant biomass. As in Fig. 6, they can pursue different typologies of realization strategies, dependent upon high (A) or low (B) quality resources. The realization again changes the context (C), affecting future realizations (D), and eventually feeding back to the large scale context of site organization when energy can no longer be obtained within the current site structure (E).

The selection of the type of food to be adopted in the realization stage changes with season. Beavers move from obtaining herbaceous forage for beaver growth to foraging woody materials for beaver maintenance. This can be seen as the use of two different typologies of realization, like some sort of tunnel versus some sort of bridge in examples given in Fig. 1. Meanwhile the reference is to the same essence of function (capturing exergy carriers in the form of plant biomass—getting food). The changes are forced by changes in associative context. By changing typologies and realization strategies, beavers are able to deal with behavioral instability arising from limits encountered at the lower level when some types of food become unavailable.
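
The loop in Fig. 8 can also be read as a small program: one essence of function (get food), two typologies of realization selected by the associative context, and a feedback to the larger scale once the site is exhausted. The sketch below is a toy rendering of that reading, not code from the paper; the energy values and the switching rule are assumptions made for illustration.

```python
# Toy model of the beaver loop in Fig. 8 (illustrative only; the energy
# values and switching rule are assumptions, not from Allen et al. 2003).
from dataclasses import dataclass

@dataclass(frozen=True)
class Realization:
    """One typology of realization for the same essence of function: get food."""
    name: str
    energy_obtained: float  # energy drawn from the site per season

# Two typologies of realization, selected by the associative context (season)
HERBACEOUS = Realization("opportunistic herbaceous forage", energy_obtained=5.0)
WOODY = Realization("selective woody forage", energy_obtained=2.0)

def choose_realization(season: str) -> Realization:
    # The season (associative context) selects the typology; the essence of
    # function -- capturing energy from plant biomass -- stays the same.
    return HERBACEOUS if season == "summer" else WOODY

site_energy = 20.0  # energy still obtainable within the current site structure
for season in ["summer", "winter"] * 4:
    strategy = choose_realization(season)
    site_energy -= strategy.energy_obtained  # the realization changes the context (C, D)
    print(f"{season}: {strategy.name}, site energy remaining = {site_energy:.1f}")
    if site_energy <= 0:
        # feedback to the larger-scale context of site organization (E)
        print("energy can no longer be obtained here: reorganize at the site scale")
        break
```

The point is not the numbers but the shape of the loop: each realization draws down its own context until the question of how to get food has to be re-asked at a larger scale.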

The idea of gain is further explicated in a 2009 article on the concept of gain in ecology.


The numbers beside the encyclopedia entry (in François, 2004) mean …

The following special markers have been used, in order to enhance the usefulness of the encyclopedia:
  • 1) meaning “systemic on a wide range”, or “general information”
  • 2) meaning “general abstract or mathematical model”, or “methodology”
  • 3) meaning “epistemological or ontological aspects”, or “semantics”
  • 4) meaning “practical in human sciences”
  • 5) meaning “more specific or disciplinarian”

In this paper-first encyclopedia, the bolded text links to other entries.

References

Allen, T. F. H., M. Giampietro, and A. M. Little. 2003. “Distinguishing Ecological Engineering from Environmental Engineering.” Ecological Engineering, The Philosophy and Emergence of Ecological Engineering, 20 (5): 389–407. https://doi.org/10.1016/j.ecoleng.2003.08.007.

François, Charles, ed. 2004. International Encyclopedia of Systems and Cybernetics, 2nd ed. De Gruyter Saur. https://doi.org/10.1515/9783110968019.

#ecological, #environmental

Christopher Alexander’s A Pattern Language: Analysing, Mapping and Classifying the Critical Response | Dawes and Ostwald | 2017

While many outside of the field of architecture like the #ChristopherAlexander #PatternLanguage approach, it’s not so well accepted by Alexander’s peers in architecture. A summary of criticisms by #MichaelJDawes and #MichaelJOstwald @UNSWBuiltEnv is helpful in appreciating when the use of pattern language might or might not be appropriate.

A distinction is made between Alexander’s first theory of architecture (1964), a second theory (1975-1979) for which he is most widely known, and a third theory (2005-2007).

Christopher Alexander’s ‘first theory’ of architectural beauty was presented in his Harvard doctoral thesis and later published as Notes on the Synthesis of Form (Alexander 1964). The inspiration for this work is Alexander’s belief that the buildings of traditional societies are inherently more beautiful than contemporary architecture. [….]

When applied in practice, Alexander discovered that this process was too demanding for all but the largest design projects.

This led to a second theory, co-authored with collaborators at the Center for Environmental Structure.

Alexander’s second theory, itself a collaborative process, was developed across three canonical books; The Oregon Experiment (Alexander et al. 1975), A Pattern Language (Alexander et al. 1977) and The Timeless Way of Building (Alexander 1979). Collectively these three works constitute one of the 1960s and 1970s most sustained criticisms of modernism. [….]

… intuitive and unconscious processes were vital components of traditional and vernacular architecture … [and] the importance of cognitive cohesion, vitality and piecemeal growth as part of a vibrant built environment … All of these concepts were central to Alexander’s second theory of architecture, which again focused on the inherent beauty of traditional urban spaces and buildings.

The third theory has been less popular, but is well known to disciples of Alexander’s work.

Ultimately however, Alexander rejected his second theory of architectural beauty as he felt it had too little generative power and too little focus on geometry. Three decades later he proposed a ‘third theory’ of beauty, which replaced patterns with the generic concept of ‘centres’ and their transformations, in addition to removing much of the neatly packaged social and architectural content that makes his second theory so compelling (Alexander 2002b, c, 2004, 2005; Adams and Tiesdall 2007).

The second theory, particularly A Pattern Language, has had the most influence outside of the built environment. It is this work on which the analysis of criticisms focuses.

Following the publication of his second theory, Alexander bemoaned a lack of engagement from architectural and design professionals which might be partially explained by criticisms of the development and documentation of this theory (Kohn 2002). The barriers preventing architects from engaging with Alexander’s theory can be broadly categorised into three groups (Table 2).

Fig. 3. Criticism connections and groupings of Alexander’s second theory of architecture: implementation and outcomes. Numbers correspond to criticism numbers in text and tables; dotted lines indicate groups and sub-groups of criticisms; arrows point from antecedent criticisms to secondary criticisms or groups of criticisms.

The first group [5, 6, 7, 8, 9, 10, 11, 12] arise from Alexander’s idiosyncratic understanding of ‘science’ (4) and subsequent issues including an absence of explicit definitions which makes practical engagement with the theory difficult.

The second group [9, 13, 14] focus on Alexander’s ambivalent use of the term ‘empirical’ to describe his theory, the progenitors of which include both his definition of ‘science’ [4] and belief in one ‘right’ way of building [3] (Fig. 2).

The final group [15, 16, 17, 18, 19] contains criticisms primarily related to the development of Alexander’s theory, including issues such as faulty reasoning that arise primarily from his argument that there is only one right way of building [3]. The problems identified in the second and third groups contribute to further criticisms of the implementation and outcomes of Alexander’s theory.
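
As a reading aid, the three groups and their antecedents can be jotted down as a small data structure; the group labels below are shorthand for Dawes and Ostwald’s descriptions, not terms from the paper, and the numbers are the criticism numbers quoted above.

```python
# Reading aid only: criticism groups and antecedents as described by
# Dawes and Ostwald (2017); the group labels are paraphrases, not theirs.
criticism_groups = {
    "definitions and 'science'": {
        "members": [5, 6, 7, 8, 9, 10, 11, 12],
        "antecedents": [4],      # idiosyncratic understanding of 'science'
    },
    "ambivalent use of 'empirical'": {
        "members": [9, 13, 14],
        "antecedents": [4, 3],   # 'science' plus one 'right' way of building
    },
    "development of the theory": {
        "members": [15, 16, 17, 18, 19],
        "antecedents": [3],      # primarily the one-right-way-of-building argument
    },
}

# Example query: which groups trace back to criticism 3?
print([name for name, group in criticism_groups.items() if 3 in group["antecedents"]])
```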

The pursuit of beauty is admirable. The science behind it is difficult.

Reference

Dawes, Michael J., and Michael J. Ostwald. 2017. “Christopher Alexander’s A Pattern Language: Analysing, Mapping and Classifying the Critical Response.” City, Territory and Architecture 4 (1): 1–14. https://doi.org/10.1186/s40410-017-0073-1.

#nature-of-order, #pattern-language