The You is I

You can criticize the love you’ll never have
Staring at the ghosts of lifetime’s past
Deaf and dumb you cry without a tongue
Holding on to memories burning numb
Forget the guts you scorched with aching arms
The villainous soothing of age thirty-one
But you were seventeen, seventeen years young
When you embraced the ones you should have hung
Like thieves that creep around your skull
Your thoughts unwinding down a corridor
Remember how you laughed at that poor soul
That soul is you with head hung low
In icy winters with no warmth to spare
I stalk fires and still catch you there
A lonesome face with gloomy eyes
It’s a horror tale written in comic disguise
Forgive my blunt effrontery of words
I, too, have done shit that hurts
But I still breathe with life that’s raw
Beauty imagined is beauty drawn
I have one sharp apology to weave
I’ve been deceived, deceiving — and believed!
I’ve hidden fragments of life so sought
Little pieces waxing in my thought
For thirty-one is not an age
He’s not some distant prophet, priest nor sage
Unfortunately, he’s something else
The lines I wrote to hide myself…

Why I am a Democratic Socialist: Capitalism and its Exchange of Ethics for Economics

I was born in Krasnodar, Russia, just before the collapse of the Soviet Union; my parents brought me to the United States a year before the collapse actually took place. I guess you could say I escaped communism by the skin of my teeth. Growing up, I heard all the ridiculously hilarious tales my dad told me about how he ran several businesses at once, all in secret from the government, just to make enough money to live relatively comfortably. To be free from suspicion as to how he made his money, my father worked as a city bus driver. The pay was almost as ridiculous as the things he did to survive. By day, he would drive the bus; by night, he would jack it up, attach a machine to the odometer, and make sure it spun all night! He was paid by the number of miles he drove. He didn’t drive many in reality, but his “nightly business” gave the appearance that he drove people day and night. And so his pay was decent. He could put food on the table, as long as his odometer kept increasing its digits in leaps and bounds.

My dad, the entrepreneur, also grew flowers. He grew lots of them. Every March 8th—International Women’s Day, for those who don’t know—was my dad’s version of Black Friday. In those days, even under the oppressive Soviet communist regime, people practiced romance. They’d buy flowers for their lovers with money secretly stolen from the government. In those days, virtually the entire expanse of Russia could be considered “government property.” Anything and everything belonged to “the government.” Nobody really knew who “the government” was, but they all knew it was surely not them. And in this way, each and every theft became theft from the government.

It would surprise me to think that “the government” did not know that people were stealing from “it.” Think about it. You give people a salary of sixty rubles a month. But rent is seventy rubles. Food is another twenty. It doesn’t take a mathematician long to figure out that people were surviving in Russia against all odds. In other words, you’d be an idiot to think that people who were alive during a famine were not hiding food. The fact that you could survive in Russia at all should be seen as an impossibility. You had to be stealing. In fact, you, and you, and you over there—all of you—had to be stealing. The communist regime produced a lot of competent thieves—for they all, en masse, became thieves. This sort of regime could not possibly last long. And—thank God!—it didn’t. On December 26, 1991, the Soviet Union became no more. It vanished—and not a tear was shed…

In 1987, Gorbachev, then general secretary of the Communist Party of the Soviet Union, tried to implement policies that would make the Soviet Union more democratic—a process known in Russian as “demokratizatsiya.” Implementing a democratic government proved harder than he thought, and so nothing ever really came of it.

Anyone living in those days would never have called Russia a “democratic” country. The people had no say in government. They could not democratically choose to be a capitalist society, for example. They could not democratically choose to be a socialist society either. The rules that governed their world, their economy, came from a group of elites. And the rules and regulations favored the elites. The common people were left to fend for themselves. This was no America: there was no working democracy in Russia.

To be honest, I’ve always been fascinated with communism, socialism, and capitalism. There was something demonically sacred about communism. It was the thing everybody here feared. And I was born in a country full of communists. I was, to be blunt, “one of them.”

I never actually cared much for politics and economics growing up. Sure, I thought communism was a spooky word, but I never really studied any of it. It wasn’t until I graduated nursing school, and had begun working as a nurse at a hospital, that I started thinking about the way our country did healthcare. And believe me, I was scared shitless. I didn’t know much. And I still don’t know much. (How many people have read the Affordable Care Act? I mean, please, the senators didn’t give a rat’s ass about it—and didn’t bother to read it—so why would a commoner bother to read several thousand pages?) Moreover, none of the doctors I’ve met, nor the nurses I’ve spoken with, knew much about healthcare policy. No one I’ve asked seemed to know what Obamacare was all about. Nobody seemed to know what it all meant. Not the health professionals, not the therapists, not any of the hospital staff, not the senators. It made me wonder: who the hell wrote this? Well, I never did find out…

The more I worked in the hospital setting, the more I became convinced that we had become a business trying to make money. We were all being forced to cut corners, to practice shitty nursing care, just so that somebody on top could make money and get their year-end bonus. Most people outside of nursing probably don’t have a clue as to what I’m talking about. Most people outside of medicine probably don’t know what I mean when I say: we have to do a million things at once. And that makes for unsafe nursing practice. In fact, in Washington State we have something called “ADO forms” we can fill out if we feel the hospital is not staffing us safely. These are “assignment despite objection” forms. Basically, the form says: I accept this patient load, but in my professional judgment I cannot safely provide the quality of nursing care I could provide with a safer assignment. In other words, the patients are too acute and I’m going to be swamped. And if I’m swamped, expect errors to be made. The hospital, then, would allegedly take liability for any errors committed on that nurse’s shift. Such is a day in the life of a nurse.

I would like to make it clear that none of what I am saying here is controversial. I am not aware of a single hospital that staffs so safely that its nurses are satisfied. If you don’t believe that this is an ongoing issue, start reading a little bit about California state law regarding nurse-to-patient ratios. For different units, depending on acuity, nurses are assigned different patient loads. If you are a nurse working ICU, the law states that there shall be one nurse for every two patients. If you work on the telemetry unit, as of 2008, the nurse-to-patient ratio is 1:4. Those working on the medical-surgical floor are staffed 1:5. This is what the nurses, the healthcare professionals, and, eventually, state law decided was best when it came to staffing at hospitals. A law had to be made because staffing had become an issue. In the majority of states in the United States, there are no state-enforced ratios. The hospitals can literally do whatever the hell they want. If the CEO and the board of directors decide to cut staffing, well, there’s really not much the nurses and doctors can do about that. When was the last time you heard anyone making a fuss about safe staffing at hospitals? You’ve probably heard more about that new stadium the college kids want. Too bad our hospitals are falling apart. So long as we have that stadium built, we’ll all be merry. And so something as critical as healthcare—good, quality healthcare—is left on the sidelines, waiting for some kind of Jesus or Good Samaritan to come around and resurrect it.

I apologize for boring my readers with nursing gibberish and something as essential to life as healthcare: I assure you, I mean well. It was becoming an angry nurse—one who wanted to do something about the healthcare we provided—that prompted my immersion in the Washington State Nurses Association, my state’s formal nursing union. You see, all of my “socialist” activities since have had their initial birth right there in a hospital setting. The rest of my story is history.

I was angry with hospitals. I was angry with staffing. I was angry that a young, athletic and fast guy like me could not keep up with the system. I could not keep up with charting. I could not keep up with dressing changes. I could not keep up with providing assistance with my patients’ activities of daily living. And certainly I could not provide a shoulder to cry on or a second opinion. I was way too busy to do any of the normal “human” stuff. This was a business, and we had to make money. Money, money, money. Everybody wanted some money.

I understood the need for money. But I never understood something as simple as providing one additional staff member—let’s say a nursing assistant “valued at” roughly $40,000 per year—just so that our unit could function well. Because, as all of the nurses know, one fall in the bathroom per year can cost your facility a million dollars in a lawsuit. And we heard about those lawsuits, believe me. And they could have been prevented. All you needed was an extra set of hands. We weren’t asking for much. But what do nurses and doctors know about healthcare, right? I mean, doesn’t the CEO know that a patient admitted to the hospital with a right-hemisphere stroke tends to be impulsive and is at a high risk for falls? And that, when left alone in the bathroom, he or she is almost guaranteed to attempt to get back to bed—unsafely—alone? That’s what happens when you have a stroke that affects that part of your brain. But I’m just a nurse.

I joined the union and ended up being one of the five nurses from the hospital who renegotiated our contract with the hospital. We went through the entire thing, line by line. Unlike the senators, we knew the thing inside out. It was highlighted to the point of becoming so saturated in color that the paper ignited our room in flashes of neon yellow. We underlined words we wanted changed; we looked up Washington State “codes”; we included clauses that we thought would serve the interests of our hospital’s nurses and patients. We did our best, no doubt about that. It was a long seven-month process. Unlike the members who worked “defending” the interests of the hospital, we were not getting paid a penny to be there. In fact, lunch, parking, and all other associated costs fell solely upon us. If we wanted to make this hospital a good place to be a patient, and a good place to work, we had to want that. Really bad. And want that we did.

I remember a conversation I had with our labor attorney. She was a middle-aged woman with dirty blonde hair and a gentle smile. She would sit there listening to all the nurses point out the strengths and weaknesses of the hospital. She’d let us rant for minutes and then interject with a brief, “I like that!” She would then write down whatever we said. During one of our lunch breaks, I asked her why she did what she did.

“Why defend nurses?” I asked her.

“Why not work for the hospitals and make the big bucks?” she asked me. “I could never do that,” she continued. “I could not do that ethically. Never.”

For Laura, making a little over forty thousand a year was more satisfying than making six figures and helping destroy this country by allowing hospitals to become businesses more interested in money than in providing quality healthcare. The attorneys fighting unions in defense of the hospitals were essentially fighting for corporate interests. These guys do not give a damn about you or your health. What matters to them is how much they can get away with while making a fat profit. That’s why hospitals hire the nation’s best attorneys.

Laura did not think that ethics should ever be compromised by economics. In fact, economics should always be subservient to ethics. To be called a good person, you need to act in the most ethical manner possible. And sometimes, especially in healthcare, that calls for acting in a very uneconomical manner. “People over profit,” as them dirty socialists say.

I didn’t use the word “socialist” above unconsciously. In fact, I used the term precisely because democratic socialists surrounded me the entire time I was working with the union. Over coffee, during and after meetings, so many of our conversations turned to politics, women’s rights, human rights, and economic equality. The people that surrounded me were some of the kindest individuals I had ever met in my life. These ladies were the very epitome of moral leadership. My sense of morality was being nourished and sustained by these conversations and our work for the nursing union. Every time we met, I thought more and more about politics and economics…

Take Fran, for example. She was a jolly woman, somewhere in her sixties, who participated in women’s rights demonstrations and strikes outside of hospitals, and was a proud participant in the hippie movement during the ’60s and ’70s. She would let homeless people into her home, feed them, and rant non-stop about social justice. She never stopped talking. I began calling her “Frantic Fran.” And, as she knew by then, I refused to participate in her “Fran-tasies!” She read books by the socialist writers Chris Hedges and Cornel West. One night I invited her to come with me and see Hedges at an event at the Bing Crosby Theater. She graciously accepted and spent the night refusing to eat my popcorn, listening intently as Hedges critiqued the corruption in our government. Aside from her denying me the pleasure of sharing popcorn with one of my professional colleagues, I think Fran qualifies as a moral leader. She is compassionate, she is involved, she knows what she is talking about, and she cares deeply about the things she engages in. Her actions are the direct result of her thoughts and words.

Then there was Cheryl. She was the sweetest and gentlest of the bunch. She had a gorgeous smile and bright eyes. She radiated a certain grace. On one of the occasions when we asked nurses to write down their concerns regarding hospital staffing, one or two nurses—who were Republicans and, naturally, could not stand unions—met us. Before we met them in person, Cheryl took me aside and said, “Look, Moses, some of these nurses here don’t like the union. They think it’s bad. They will try to hinder our progress. They will monitor our activities and may report us if we do anything that does not comply with hospital policy [such as talking to nurses about the union while they are actively working and involved in patient care]. In such cases, we’ll just smile, offer coffee and cookies, and move on. Moses, don’t be angry at them. You are doing them good—they simply don’t know what you are doing for them.” Cheryl here was echoing that prophet’s words: “Father, forgive them, for they know not what they do.”

Somewhere during my life as a nurse, a college student studying theology, and a union member, I began reading Robert Reich’s works. The things Reich wrote about struck a chord with me. He was dealing with the same issues I was dealing with. This was real. This wasn’t economic theory. This wasn’t some bullshit Hollywood one-night flick. This was my life. These were my patients’ lives. I watched his documentary Inequality for All and found myself dumbfounded. (The documentary was recommended to me by our hospital’s own medical director. These damn socialists are everywhere!) The economy was rigged and nobody was doing anything about it. CEOs were barely squeezing their fat asses through the bank doors to cash their insane checks—checks they wrote to themselves. A select few were reaping the majority of the country’s money. Big Pharma was having a cakewalk buying out lobbyists, senators, and scientific studies left and right. In fact, the pharmaceutical industry became so successful at purchasing studies that in 2005 John Ioannidis, a Stanford epidemiologist, titled his research paper “Why Most Published Research Findings Are False.” It has since become the single most downloaded technical paper in the history of PLOS Medicine. This shit was pervasive. Our scientific community was being handed over to corporations that didn’t have any sense of ethics. They didn’t give a damn about right or wrong. They didn’t give a rat’s ass about false research, so long as they made a dollar or two.

They placed profit over people. They made ethics subservient to economics. They did what the democratic socialists feared all along: they neglected their sacred duty to be good, ethical people. They had welcomed in the greedy, all-consuming hands of unfettered capitalism: a virus so sickening that even an ethical human being, when infected with it, fails to abide by simple, universal principles of right and wrong; simple things like “Don’t lie” fall on deaf ears. But that’s what happens when a society, a democratic community of people, allows economics to be the end-all, be-all of human flourishing. If ethics do not exist, all things are permissible. And when unfettered capitalism pits economics against ethics, it doesn’t take an MLK to figure out which one goes flying out the window first.

At our meetings with the hospital we asked for safe staffing. All of us delivered “speeches” to the hospital’s attorney. Some of us spoke like a Demosthenes. But that was, mostly, to no avail. The attorney, after one such speech, told the nurses to quit offering her “sound bites.” She didn’t give a shit about patient safety or the concerns of the nurses. We spent one such meeting discussing safety concerns for something like ten hours. The following day, the hospital’s human resources administrator sent out an email “summarizing” the efforts of the union (in my own paraphrase):

“The Washington State Nurses Association is requesting that all employees pay union dues. We believe that employees should be allowed to exercise their right to choose whether they would like to be a member of the union and pay union dues. Therefore, we are not in agreement with the union. Negotiations are expected to resume on…”

Reading the email, I realized exactly what it felt like when biased journalism was passed off as dogmatic truth. Here was a summary of our activities that, while it stated something true, failed to convey the atmosphere of those meetings. We were not emphasizing mandatory union membership for all new employees. We were not asking for wage increases all day. We were asking for safe staffing. And we spent the majority of our time giving reasons why. That, in short, was the real concern of the union and the nurses. (Excepting one nurse who was more concerned with money [she was the only bad apple on our team].)

The email hit me like a ton of bricks. I was now able to relate to those people who read magazine articles about themselves, only to find that the article has (almost) nothing to do with them.

The emails the hospital sent out had a clear agenda: convince the nursing staff that the union was a thorn in their side. Despite this propaganda—and it wasn’t explicitly evil, it was mostly subtle—we continued having staff meetings, served coffee, and discussed the need for a strong union at the hospital. Twice a month or so we’d meet at the collective bargaining table with the hospital. In between those times we’d meet separately with our attorney, union representatives, and nurses to discuss the issues we’d raise at upcoming meetings. We took notes, wrote “speeches,” and essentially came up with every argument and counterargument regarding safe staffing ratios at our hospital. We had scientific research papers showing how state-legislated nurse-to-patient ratios, such as those found in California, actually saved hospitals money in comparison to similar states, due to a decrease in postsurgical infections secondary to better staffing.[1] A recent article published July 14, 2015, in Scientific American was titled “Widespread Understaffing of Nurses Increases Risk to Patients.”[2] The blurb below the online version read: “Emerging data support minimum nurse-to-patient ratios, but hospital administrations are reluctant to adopt them.” Such was the state of staffing nationwide. And as of the date of this writing (December 22, 2015), California remains the only state in the entire country that mandates nurse-to-patient ratios by unit. No other state does this. Why? Aren’t there laws in this country mandating how hospitals should run? Well, sort of. The law you are thinking of is probably the shitty federal regulation at 42 CFR 482.23(b). The section states, and I am not joking, “The nursing service must have adequate numbers of licensed registered nurses, licensed practical (vocational) nurses, and other personnel to provide nursing care to all patients as needed.”[3] That’s it.
That nebulous, vague language is about as weak as Tweety Bird facing Marshawn Lynch in beast mode. What in the hell does “adequate numbers” mean? And who is determining what is adequate? We all know—and I am using the categorical “all” here—that hospitals nationwide are not being staffed adequately. The nurses know it; I know that they know that they aren’t being staffed adequately. But the hospital administrators—about as detached from healthcare as a bed bug is from beauty care products—have no idea what the hell adequate staffing is. They sure as hell know how to make a buck or two, but don’t give me the nonsense that they understand nurses, doctors, or the needs of patients. They don’t.

It’s no surprise, then, that we never got safe staffing done at our hospital. We never got ratios put into our contract. Of course we knew it was next to impossible, but a couple of us decided we’d let the administration know just how the nurses felt. Out of seventy-five nurses at this hospital, something like forty-five wrote small cards stating their support for our proposals regarding safe staffing and ratios. This wasn’t, in other words, something controversial at our hospital, or something we, like a despotic regime, were trying to force upon a non-compliant majority. In fact, truth be told, we were the majority. But not all stories, as I’ve grown to learn, have happy endings. Ours certainly didn’t. However, there were a few things that we did change. We included a code from Washington State that mandated safe staffing committees at hospitals. We copied and pasted it right into the contract. It would, theoretically speaking, give nurses some negotiating power when staffing went downhill. We could, in theory, at least point to the wording and say, “Look, the safe staffing committee doesn’t think these ratios are safe.” We did do that. We also, somewhat reluctantly, spent time negotiating our wages. In comparison to nearby hospitals, we were behind by something like twenty percent. They ended up giving the entire nursing staff, across the board, a two-dollar-and-fifty-cent raise. They figured that this was a good idea but that increasing staff members was not. I have no idea why they did what they did, but they did. So we took our money—and our real fight (i.e., safe staffing)—packed our bags, and headed home.

In all honesty, I left a better—more informed—person. I may have lost the battle, but I have not lost the war. I took the war with me, brought it home, brooded over it for weeks. The weeks turned into months. And here I am, months later, still contemplating all of these real issues. How is it that unions are so weak? How is it that, in America, we no longer care about unions? How is it that at my own hospital nurses were fighting against us—against their own?

Rewind my life a couple of years and you’d find me standing behind a cash register at a T.J. Maxx store in a white dress shirt, a tie, and khaki pants. I was on my way to nursing school, had finished my prerequisites, and was killing time during the long months I spent on the waiting list. After I signed on with T.J. Maxx, the managers gathered us into a room during orientation and played us a video. At that time—and this was a long time ago—they still used videocassettes. So here was this chubby, happy-go-lucky manager with whiskers and a thick Italian accent trying to teach us youngsters why unions were scary. He was probably merely a talking head for the corporation doing his job. I now doubt he had any idea as to what he was talking about. He probably got a memo with a basic script, read it to us, and went home to a nice wife, two kids, and a dog, all in a wealthy suburb. Even then I never understood why corporations like T.J. Maxx feared unions. What was so bad about people uniting? What was so dangerous about people having power? What was so bad about a democratic process?

And—there—I said it: this was about democracy. The corporations hated democracy. They hated the fact that regular people like you and I could gather together and tell them—nay, demand—certain rights. We, as a collective bargaining unit, could voice our concerns; we, as a community, could have a say in the way we were treated, the wages we were paid, and our working conditions. And the two or three bigwigs at the top did not give a rat’s ass about your rights. So long as you left them alone, and gave them 300x the money their average employees made, they were happy. But they would not be happy for long. Why should the CEO make only 300x more than his or her average employee? Why not 400x more? Why not 500x more? Eventually the CEO asks himself: Why the hell should we even pay these fuckers at all?

Robert Reich, in his latest book Saving Capitalism: For the Many, Not the Few, writes about what he calls “the meritocratic myth.” This is a capitalist myth invented by the rich and wealthy to keep little people sucking on their thumbs for life. Essentially it goes like this: in a capitalist society, people are paid what they are worth. So, if you have a CEO making a billion dollars a year and a worker at her corporation making six dollars an hour, well then, so be it: the worker must only be “worth” that much. But there is a historical problem with this. (History tends to reveal all kinds of problems, in my experience.) As Reich writes,

“Anyone who still believes people are paid what they’re worth is obliged to explain the soaring compensation of CEOs in America’s corporations over the last three decades, relative to the pay of average workers—from a ratio of 2 to 1 in 1965, to 30 to 1 in 1978, 123 to 1 in 1995, 296 to 1 in 2013, and over 300 to 1 today. Overall, CEO pay climbed 937 percent between 1978 and 2013, while the pay of the typical worker rose just 10.2 percent.”[4]

Clearly, as history shows, CEOs are making more and more while we, once our wages are adjusted for inflation, are making less and less. How is it that in a capitalist society, which calls itself a democracy, we have a large portion of people making these corporations what they are but not being compensated for their work? Why is it becoming increasingly common to think that a CEO—somehow in isolation from all the employees working with him—is the only one worthy of his wages? If you think this is the case, congratulate yourself: you’ve bought into a myth they want you to believe. It’s like a child’s belief in Santa Claus. There’s no empirical proof for Santa’s existence, but it keeps the naughty kids in check. And all you naughty workers need to suck your thumbs and suck it up: life ain’t fair. The CEOs make a lot of money. Deal with it. But how do they continue to make so much money? One reason is that they have money, and money gives one access to power. Access to power gives one access to lobbyists and senators. You get those guys to write a bill for you, favoring you, and you’re good to go. This is why even when CEOs screw up, they still get paid—for the majority of people in America believe all kinds of myths, and the myth of meritocracy is one of them. Take Martin Sullivan, for example. He made $47 million when he left his company, AIG. The company’s share price dropped almost a hundred percent under his leadership. But CEOs are paid what they’re worth, you say? Thomas E. Freston, the CEO of Viacom, ended up getting a severance package of $101 million after being fired.[5] The list goes on ad infinitum.

You know you’ve complained about the lazy McDonald’s worker—and rejoiced when he was fired—but when was the last time you complained about the CEOs and their pay? The rich and powerful have always prided themselves on being able to make the little people wage war on littler people. Seldom do the poor gather together and wage war on the elite who enslave them, the ones who are responsible for the majority of their problems. Welfare is an issue? Are you kidding me? CEO severance packages are the issue. Stop comparing fleas with elephants in the room, pal.

And guess who’s paying for the CEO pay? You and I. You heard that right: you and I. Yes, we’re paying. “[C]orporations deduct CEO pay from their income taxes, requiring the rest of us to pay more proportionately in taxes to make up the difference. To take but one example, Howard Schultz, CEO of Starbucks, received $1.5 million in salary for 2013, along with a whopping $150 million in stock options and awards. That saved Starbucks $82 million in taxes.”[6] And you, my friend, the one residing in Washington State, subsidized his pay. We are responsible for that $82 million loss in tax revenue. Congratulate yourself. And next time you pay taxes, remember, some of that money is going to Howard Schultz. Literally.

And while the (mostly) Republican fan base protests an increase in the minimum wage, try to swallow the $26.7 billion paid out to the already rich Wall Street bankers in bonuses alone. This “would have been enough to more than double the pay of every one of America’s 1,007,000 full-time minimum wage workers that year.”[7] But enough about the majority of people residing in America. Who cares about those guys, right? All you have to do is work hard. Get a degree. You’ll be fine, they said. Well, that’s no longer true, either. “[B]etween 2000 and 2013, the real average wages of young college graduates declined.”[8] In the past, even a factory worker could provide for his family, stay-at-home wife, and three kids. He could buy a small home in a good neighborhood and own two new cars. Today, that’s not the case. The worker is making shit, his wife is making shit working full-time, and they can’t afford children or good healthcare. The cars they drive are owned by some big bank. The house they live in is owned by the same bank. The degrees they both hold were financed by debt—owed to the same bank. Nothing is theirs. They are, no doubt, slaves to Wall Street. They work, they breathe, they live to pay some dude at the top. That’s the reality of modern America. But what happened? Did our GDP decrease? Did something happen that could explain this profound change in the economic reality of many Americans?

“Since 1979, the nation’s productivity has risen 65 percent, but workers’ median compensation has increased by just 8 percent. Almost all the gains from growth have gone to the top.”[9]

That, my friend, is what happened. No, it wasn’t the Mexicans; it wasn’t the Muslims; it wasn’t the immigrants; it wasn’t the bum that caused your problems. It was the wealthy people populating a small section of New York called Wall Street, along with the rest of the Wall-Street-inspired, greedy CEOs.

Let’s play a little game of comparisons. Let’s have Reich take us back a couple of decades.

“Fifty years ago, when General Motors was the largest employer in America, the typical GM worker earned $35.00 an hour in today’s dollars. By 2014, America’s largest employer was Walmart, and the average hourly wage of Walmart workers was $11.22…The GM worker was not better educated or more motivated than the Walmart worker. The real difference was that GM workers a half century ago had a strong union behind them that summoned the collective bargaining power of all autoworkers to get a substantial share of company revenues for its members.”[10]

And there you have it: the solution to our current crisis. It’s been in American history—our history—for a very long time: we need strong unions.

Do we have any real evidence that unionization will save our nation? I think we do. As Reich points out,

“Some argue that the decline of American unions is simply the result of ‘market forces.’ But other nations, such as Germany, have been subject to many of the same ‘market forces’ and yet continue to have strong unions…In contrast to decades of nearly stagnant growth for most Americans, real average hourly pay in Germany has risen by almost 30 percent since 1985.”[11]

Here we see two countries going through the same technological revolution that has engulfed America, with vastly different economic results for the middle and working classes. One key difference is that unions are still alive and well in Germany.

Returning for a brief second to my claim that we need to increase the minimum wage, critics—those people who live in castles made of cloud-stuff and write with immense knowledge from their Ivory Towers—claim that increasing the minimum wage would result in massive unemployment. But has this actually been the case? Not at all.

“Research by Arindrajit Dube, T. William Lester, and Michael Reich confirms this. They examined unemployment in several hundred pairs of adjacent counties lying on opposite sides of state borders, each with different minimum wages (one at the federal minimum, the other at a higher minimum enacted by a state) and found no statistically significant increase in unemployment in the higher-minimum-wage counties…”[12]

There are other reasons why we need to increase our minimum wage. And this one involves the rich and wealthy yet again. (I hope you’re starting to see who’s really causing a lot of our problems.) This may not be news to you, but corporations want to pay their workers as little as they can. Why? Because they can make more money that way. But by paying their workers a minimum wage that is no longer the “living wage” it was meant to be, companies like Walmart and McDonald’s leave the rest of us taxpayers to subsidize them. Here’s how.

People who are paid the minimum wage usually rely on government-subsidized programs like food stamps and Medicaid. Where is the money coming from to pay for these programs? From you and me, yet again. Since McDonald’s doesn’t want to pay its workers living wages—wages they can survive on—it makes the rest of us pay for its employees. Next time you walk into a McDonald’s, remember that you are helping these workers get their paycheck. Walk in like you own the place (because you do).

If you think I’m kidding, here’s another statistic for you:

“[I]n 2012, 52 percent of fast-food workers were dependent on some form of public assistance, and they received almost $7 billion in support from federal and state governments. That sum is in effect a subsidy the rest of American taxpayers pay the fast-food industry for the industry’s failure to pay its workers enough to live on.”[13]

We the people have come to a crucial crossroads. If we were Jesus, this would be our “cleansing of the temple” moment. Somebody has got to stop this. Somebody has got to take a stand: defend unions, demand fair wages, and stop corporate greed.

In the 1950s, something like thirty percent of the private sector was unionized. Today, that number is somewhere around seven percent. And, not coincidentally, as union rates decreased, wages decreased along with them.[14] This issue is very personal for me, too. I moved from a unionized state (Washington) to a right-to-work state (Georgia). I knew I was in for a pay cut. What followed, however, could have come straight out of communist Russia. I went from making $30.32 per hour (along with an evening shift differential of $2.50) to making $27.66 (with no shift differential). That was the best pay I could get, and only with a letter written on my behalf to the CEO. They said the cost of living was cheaper in the South. I have found that to be mostly untrue. Starbucks is the same price everywhere you go. And that’s the reality I live every day. You make less, you spend more. During the time I spent looking for a job, I was offered wages as low as $24 per hour (at my previous hospital, new graduates started at over $27). The point is that I mostly live paycheck-to-paycheck now. I know exactly what they mean when they say, with a meh look on their face, “I live in a right-to-work state.” This year, Wisconsin became a right-to-work state under its governor Scott Walker, a whore in bed with the Koch brothers. Slowly but surely, many Republicans, corporations, Wall Street, and the Koch brothers would like to make the minimum wage obsolete. It would make their lives so much easier. Then they could pay you anything they liked.

In the latest research comparing wages in right-to-work states with those in non-right-to-work states, workers in right-to-work states (located mostly in the South) made 3.2% less.[15] The next time you hear a (usually Republican) presidential candidate or senator talk about the beauty of the “free market” and the wonder of “right-to-work” laws, try to do what scientists do: follow the evidence. Don’t let anyone tell you something that is not grounded in reality. The people living in the American South are poorer than those living in the North. That’s the simple truth. If you wish to join their ranks, vote for any Republican of your choosing. If you wish to improve your working conditions and wages, vote for whoever is a strong advocate for unions (“collective bargaining”), and these people are seldom Republican.

My essay must now come to a screeching end. It’s become longer than I initially intended. I thought it’d be short and sweet. Lo and behold, it is long and bitter. I hope my reader is more informed for having read it. I believe in democracy not because it is perfect, but because it is the best thing we have. I identify with many democratic-socialist ideals because I think they are the best we have. I do believe in unions and in a democratic community of people that isn’t afraid of saying, “Yes, we believe in a government that is not afraid of regulating Wall Street and economics.” Call this a very, very weak form of socialism. Oh well. I’m not here to defend terms or to hide behind words. I identify with many of the concerns Bernie Sanders has. Having said that, I think it’s safe to call me a democratic socialist. I don’t think I, or people like me, have all the answers. We don’t think we do. I’m not saying my ideas will not create problems and unintended repercussions. All ideas do that. I am saying that this—all the words written above—is a good place to start. I am saying that we must not allow ethics to be replaced by economics. I am saying that we need a strong democracy. I am saying that we need to be able to have large—in Trump’s words, “y-uuuge”—unions. Finally, I hope that long after you’ve finished reading this, the stench of unfettered capitalism fills your nostrils, causing you to spend a disproportionate amount of time sniffling and wondering what the hell went wrong…


Written by: Moses Y. Mikheyev



[1] P. G. Shekelle, R. M. Wachter, and P. J. Pronovost (eds.), “Making Health Care Safer II: An Updated Critical Analysis of the Evidence for Patient Safety Practices,” AHRQ Publication No. 13-E001-EF (Rockville, MD: Agency for Healthcare Research and Quality [US], 2013). See also “Safe-Staffing Ratios: Benefiting Nurses and Patients,” Department for Professional Employees, AFL-CIO, accessed December 22, 2015.

[2] Roni Jacobson, “Widespread Understaffing of Nurses Increases Risk to Patients,” Scientific American, July 14, 2015, accessed December 22, 2015,

[3] 42 CFR 482.2—Condition of Participation: Nursing Services, accessed December 22, 2015,

[4] Robert B. Reich, Saving Capitalism: For the Many, Not the Few (New York: Alfred A. Knopf, 2015), 97.

[5] Ibid., 104-5.

[6] Ibid., 105.

[7] Ibid., 111.

[8] Ibid., 117.

[9] Ibid., 123.

[10] Ibid., 126-7. Italics mine.

[11] Ibid., 127.

[12] Ibid., 136.

[13] Ibid., 137.

[14] Ibid., 89, 131.

[15] Elise Gould and Will Kimball, “‘Right-to-Work’ States Still Have Lower Wages,” Economic Policy Institute, Briefing Paper No. 395 (2015).

The Birth of Moral Leadership: How to be a Moral Leader in the Modern Society

In this paper, I deal with the following question: What does it mean for a person to be a moral leader in our modern society? And what does it mean for our society to have such a leader? For the sake of this paper, I will make a subtle distinction between ethics and morals. By “ethics” I will generally mean: the external, theoretical principles informing one’s concept of right versus wrong that govern one’s behavior. By “morals” I will generally mean: the internal, practical activities an individual conscientiously and willfully engages in, activities that reflect one’s own internalized concept of right versus wrong. In layman’s terms, “ethics” has to do with theory, and “morals” has to do with practice.

Given the aforementioned definition of “morals,” what do we mean when we say “right versus wrong”? That is, when speaking of “morals,” what makes an action “right” and what makes an action “wrong”? And, when speaking of “ethics,” what makes a theory “right” and what makes it “wrong”? I, like many other moral philosophers, believe that theory informs practice. One cannot, generally speaking, have morality without an ethical rationale. Lawrence Kohlberg, a moral psychologist, discovered empirically that moral education is directly related to moral practice: the more educated one is in ethical theory, the more one tends to act morally.[1] It is for this reason that I will now briefly attempt to articulate guidelines for an ethical theory that sheds light on right and wrong conduct before developing my thoughts on moral leadership. The following pages, then, are not meant to be exhaustive or dogmatic; rather, I merely seek to offer what I think are tentative guidelines for ethics and moral conduct. That is, I can only modestly hope to offer some insight on this most thorny of issues.

Ethical theory is not unanimously agreed upon or universalized: many ethical theorists do not even agree on the basic elements that make up concepts of right and wrong. Since there are vast amounts of disagreement, and oceans of numerous and contradictory theories, I will selectively articulate my own ethical theory, the one that informs my moral actions.

I will begin by dealing with J. S. Mill’s “utilitarianism.” This is, perhaps, one of the simplest ethical theories. Mill writes:

“Utility” or the “greatest happiness principle” holds that actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness. By happiness is intended pleasure and the absence of pain…[2]

Bentham’s phrase “the greatest good for the greatest number”[3] succinctly reflects this view. Mill would argue that a right action is one which produces, consequentially speaking, increased (relative to a prior state) amounts of pleasure. It is an empirical fact that humans have nociceptors (neurons that send pain signals). It is likewise an empirical fact that humans have opioid receptors and dopamine, responsible for pleasure sensations and the anticipation of pleasure, respectively. It is not hard, objectively speaking, to develop an ethical theory about what we should universally do or not do when it is grounded in such a universal fact of human anatomy: virtually all normal, functional human bodies experience pleasure and pain. We intuitively seek out pleasure and avoid pain. In fact, most of the time humans have a heightened awareness of anything that may cause us pain: we are constantly on the lookout for anything that may result in the experience of pain.

Whenever anyone develops any kind of theory, it is always a good thing to ask oneself: Is this a model of the world? And is this a model for the world? Utilitarianism certainly answers the first question well. Utilitarian theory is grounded in objective facts, facts that are accurately portrayed in its “model of the world.” But is it also a “model for the world”? Does it say something not only about what is, but also about how things should be?

In utilitarian ethics, that which causes pain is to be avoided; it is labeled “wrong.” On the other hand, that which causes pleasure is to be pursued; it is labeled “right.” But is this form of ethical reasoning a valid way for humans to think about how the world should be? That is, is utilitarianism a model for the world? Should we wish it to be a model for all of us? I can think of many reasons why utilitarianism alone cannot function as an exhaustive ethical theory. If pleasure is the greatest ethical principle guiding moral behavior, then, according to utilitarianism, Hugh Hefner and all drug abusers are clearly more ethical than the rest of us: they alone experience dopamine at rates most of us have never dreamed of. But maybe, just maybe, consequentially speaking, their actions do not lead to the greatest amount of pleasure for the greatest number of people? Maybe the drug abuser, consequentially speaking, will end up suffering greatly at some future point in life? In other words, would not a utilitarian argue that his actions—namely, drug abuse—are not, consequentially, in the right? But how do we go about predicting the future? How do we detach ourselves from our current experience of pleasure and think about a theoretical, future experience of pleasure? Is it even possible to have this kind of omniscient knowledge beforehand? For example, if sex is a pleasurable experience, and contraception works, why not engage in all kinds of sex acts with the most people possible? Should I wait for a monogamous marriage and, hence, betray my own principles in favor of something unseen and not currently empirical? (That is, the theoretical, future monogamous marriage is not currently being experienced by the individual.) Should the utilitarian ever place pleasurable experiences on hold for something else?

Utilitarianism fails to account for the conflicts which arise between “the greatest good” and “the greatest number.” As Nicholas Rescher has shown, it is possible that Bentham’s statement—“the greatest good for the greatest number”—can produce chaos. Take the following distribution schemes, for example:[4]

Scheme I:
A receives 8 units of happiness
B receives 1 unit
C receives 1 unit
(Total: 10 units)

Scheme II:
A receives 4 units of happiness
B receives 4 units
C receives 1 unit
(Total: 9 units)

It should be quite evident that Scheme I is in accord with “the greatest good,” while Scheme II is in accord with “the greatest good for the greatest number.” So which do we honor? Which action is “right”?
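Rescher’s conflict can be made concrete with a toy calculation. The following sketch (the numbers follow the two schemes above; the code itself, including the arbitrary cutoff for being “well off,” is merely illustrative) shows that the two halves of Bentham’s formula pick different winners:

```python
# Toy illustration of the conflict between "the greatest good"
# and "the greatest number" (units taken from the schemes above).

scheme_1 = {"A": 8, "B": 1, "C": 1}
scheme_2 = {"A": 4, "B": 4, "C": 1}

# "The greatest good": maximize total happiness.
total_1 = sum(scheme_1.values())  # 10 units
total_2 = sum(scheme_2.values())  # 9 units

# "The greatest number": maximize how many people are reasonably
# well off (here, arbitrarily, receiving more than 1 unit).
well_off_1 = sum(1 for u in scheme_1.values() if u > 1)  # 1 person
well_off_2 = sum(1 for u in scheme_2.values() if u > 1)  # 2 people

# Scheme I wins on total good; Scheme II wins on number of beneficiaries.
print(total_1, total_2, well_off_1, well_off_2)  # 10 9 1 2
```

The two criteria disagree, which is precisely the point: Bentham’s single phrase hides two distinct maximization problems.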

Many more such critiques exist; it is beyond the scope of this paper to present them all. Having said that, I would now like to turn to another famous ethical theory: Kantian deontology.

Kantian ethical theory, a form of deontology, has to do with intents rather than consequences. Instead of focusing on the consequences of an action—as the utilitarian consequentialists do—the deontologist, specifically one of a Kantian bent, focuses on the intent behind the action in determining whether the action is right or wrong. If an action was intended to hurt an individual but accidentally resulted in something positive, then, according to Kant, that action was not right. (According to some utilitarians,[5] it would be deemed right nonetheless, since it ended up increasing pleasure, though the individual had different original intents!) Kant believed, unsurprisingly, that consequences never mattered. In fact, “It is not possible to think of anything in the world, or indeed out of it, that can be held to be good without limitation except a good will” (GMS 4:393, 5-8).[6] Kant focused on the autonomous lawgiver, that is, the autonomous individual who follows in all of his or her actions a self-imposed moral law. With his famous categorical imperative, Kant set out to universalize his ethical theory: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.” If an action cannot be universalized, it should not be committed. This may sound less practical than utilitarianism, but, I assure you, it is not. For example, I was once in an isolated part of northeastern Washington, standing on a boardwalk over a lake. I was carrying a water bottle and became quite annoyed with it. I entertained the thought, for a split second, of throwing it into the pristine lake waters below. And then Kant spoke to me with that “still, small voice” of his: “Do you wish to universalize this action?” I certainly did not! “What would the lake look like if everyone dropped his or her waste into it?” I thought to myself. I ended up carrying the bottle for the rest of the trip.

But even the divine Kant has his problems. What if two categorical imperatives conflict with one another? What if I am placed in a situation in which I must choose between one or the other? What if, to invoke Kant’s article On a Supposed Right to Lie Because of Philanthropic Concerns, an individual is faced with a choice between lying and murdering someone, and, whatever the reasons, must commit one or the other? What do you do then? Even with Kantian ethics, we run into problems in determining the “right” thing to do. What if our intentions are always good and yet, strangely, our actions, consequentially, always end up harming others—are such actions “right”?

I find both utilitarianism (consequentialism) and Kantianism (deontology) useful. However, as one can tell, I also find both ethical theories problematic to an extent. How do I, as a moral individual, resolve these problems? In essence, I resort to a via media, attempting to reconcile the two by means of some form of compatibilism.

The theologian Dietrich Bonhoeffer was perplexed by this problem as well. In his theological work Ethics, however, he found a way out, finding inspiration in Jesus’ saying, “[E]very good tree bears good fruit, but a bad tree bears bad fruit” (Mt. 7:17, NIV):

There is an old argument about whether only the will, the act of the mind, the person, can be good, or whether achievement, work, consequence, or condition can be called good as well—and if so, which comes first and which is more important. This argument, which has also seeped into theology, leading there as elsewhere to serious aberrations, proceeds from a basically perverse way of putting the question. It tears apart what is originally and essentially one, namely, the good and the real, the person and the work. The objection that Jesus, too, had this distinction between person and work in mind, when he spoke about the good tree that brings forth good fruits, distorts this saying of Jesus into its exact opposite. Its meaning is not that first the person is good and then the work, but that only the two together, only both as united in one, are to be understood as good or bad.[7]

Like Bonhoeffer, I think that for an action to be morally right it must also be ethically right. That is, the ethical theory must be right, the intent must be right, and the consequential action must be right. The greatest good action is an action that produces the greatest good for the greatest number—according to empirical notions of pleasure and pain—being inspired by right intent; an action, at the same time, you would will to become a universal law.

How does all of this translate into helping us become better moral leaders in a modern society? Having considered ethics and morality, I now turn to leadership: what does it mean to be a “leader”?

For the sake of clarity and simplicity, I will define “leadership” as the ability of an individual, functioning as a leader, to guide other individuals, functioning as followers, to act in accordance with the desired course of action of the leader. That is, a leader is able to get others to do what he or she desires that they should do. What, then, is moral leadership? Moral leadership, harkening back to our previous definitions, would entail the following definition:

The ability of an individual, functioning as a leader, to guide other individuals, functioning as followers, to act in accordance with the desired course of action of the leader; the “desired course of action” being informed by a theoretical ethic, which are the external, theoretical principles informing one’s concept of right versus wrong that govern one’s behavior. Such theoretical ethics are then acted upon and become moral habits, which are the internal, practical activities an individual conscientiously and willfully engages in, activities that reflect one’s own internalized concept of right versus wrong. The moral leader’s concept of right versus wrong is greatly influenced by the maxim: The greatest good action is an action that produces the greatest good for the greatest number—according to empirical notions of pleasure and pain—being inspired by right intent; an action, at the same time, you would will to become a universal law.

A moral leader, in my opinion, is inseparable from his[8] theoretical ethic (the “stuff” floating in his head) and his practical morals (the “stuff” everyone sees him doing). A moral leader is aware of basic concepts regarding pleasure and pain. A moral leader is aware that not all utilitarian actions are “right.” He is aware that not all deontological actions are “right.” He is acutely aware of the problems one encounters when dealing with morality. A moral leader nonetheless strives to do the right thing. He formulates theories and rationales for his actions. He is the guy you find thinking long and hard about his actions and why he chooses them. And, most importantly, a moral leader guides others, influencing them to participate in his vision, a vision that he shares passionately and rationally with those who follow him. In inspiring others to act like him, to reason like him, to follow his desired course of action, the leader implicitly universalizes his morality. In doing so, one can only hope that he takes Kantian ethics seriously.

Since I have offered my thoughts on moral leadership, I would now like to focus somewhat more specifically on practical ways a leader goes about bringing his “desired course of action” to fruition. In the following paragraphs, I will engage with the popular Bennis and Goldsmith text, Learning to Lead.

Bennis and Goldsmith believe that all successful leaders have the following six “competencies”[9]:

  • Mastering the Context. Leaders are able to get a feel for their surroundings and understand “the big picture.”

  • Knowing Yourself. The leader is aware of his or her ethical commitments, subjective worldviews, being always aware of who he or she is. Such leaders are also always learning about themselves.

  • Creating a Vision. Leaders create a vision so real that “they live and breathe” it.

  • Communicating with Meaning. Leaders are able to understand and function at the level of their followers.

  • Building Trust Through Integrity. Leaders lead ethical lives that those who follow them witness on a daily basis. They are consistent with their actions.

  • Realizing Intentions Through Actions. Leaders are able to bring their ideas to fruition by making them concretely real.

Many of the above “six competencies” are quite self-explanatory; therefore, I will not pedantically engage in superficial commentary. Rather, I will focus my remarks on a couple of them, discussing the things I believe are of utmost importance for a moral leader in the modern society.

With the continuing increase in technological development—think of social media, the Internet, cell phones, etc.—humans have begun to create a context that is vastly different from all previous contexts in history. We are now living and leading in a society in which a follower may never physically meet a leader; in which relationships between boyfriend and girlfriend may span oceans and be entirely virtual. The landscape upon which we now act has become something else. How does a leader function within the present structures? What, specifically, is different about our modern society? It seems to me that communication and human relations have become de-personalized. A Black Lives Matter activist may use all kinds of tools that were not available to Martin Luther King, Jr. This sort of de-personalization comes with its pros and cons. A leader can spread her message using social media far beyond her immediate surroundings. But this comes with a cost. Such communication lacks many features that are necessary for a leader to be successful. I may see a talking head on Facebook. The message may even inspire me. And I may still not do anything about it. How do I know that she is telling the truth? How do I know that she really will do what she claims she will do? Do I even understand her message? What if I have questions for her but cannot bring them to her, since I am not able to communicate with meaning with her? The modern leader is faced with the problem of communicating with meaning. It is common today for all kinds of quotes to be taken out of context. With the creation of Twitter’s 140-character tweets, human beings are now expected to “communicate” messages in two or three sentences. The twitterization of human language and communication is a death sentence to a modern orator striving to be a Lincoln or a Demosthenes. What is the solution to this problem? One possible response is to adapt: we may simply have to strive to say as much as possible without becoming verbose. Another option is to communicate on a different platform, something akin to TED Talks.

This twitterization of language has also deformed the way we listen to and hear one another. A leader has to understand the people he is striving to engage. With the little information people communicate these days, it is helpful to restate to the other person, in your own words, what you heard him or her say. This allows the leader to clarify any misunderstandings. At all points, one must recognize the subjectivity of one’s audience as well as one’s own. Terms and phrases such as socialist, goodness, and the right thing may mean vastly different things to different people. It is a good idea to have people define thorny terms. Robert Franklin reminds us, “Conversation is the highest form of human activity.”[10] It’s a good idea to communicate meaningfully.

One cannot write a paper on leadership and ethics without recourse to Aristotle’s Nicomachean Ethics. People sometimes forget the basic advice Aristotle left us: “by doing just things we become just…”[11] It’s a pithy truth. One of the ways a leader builds trust is through integrity. And integrity means nothing less than being undivided, consistent, honest, and morally upright. But as with all virtues, one must practice a life of virtue in order to be considered virtuous. To be known as an honest person, one must consistently practice honesty. To be a moral leader, and to be known as one, is to consistently act like one.

While this paper is not as exhaustive as one might like—and many theoretical (and perhaps practical) scenarios have not been considered—my hope has been to present a definition of moral leadership that would generally work for many people. My goal has not been to offer some dogmatic truth; rather, I have sought to offer my thoughts on a thorny subject, thoughts which I hope may stimulate my reader to make whatever progress he or she can toward becoming a moral leader him- or herself.

Written by: Moses Y. Mikheyev




Allison, Henry E. Kant’s Groundwork for the Metaphysics of Morals: A Commentary. New York: Oxford, 2011.

Bartlett, Robert C., and Susan Collins, trans. Aristotle’s Nicomachean Ethics: A New Translation. Chicago: University of Chicago Press, 2011.

Bennis, Warren and Joan Goldsmith. Learning to Lead: A Workbook on Becoming a Leader, 4th ed. New York: Basic Books, 2010.

Bonhoeffer, Dietrich. Ethics. Dietrich Bonhoeffer Works. Volume 6. Translated by Reinhard Krauss, Charles C. West, and Douglas W. Stott. Minneapolis: Fortress Press, 2005.

Franklin, Robert M. Liberating Visions: Human Fulfillment and Social Justice in   African-American Thought. Minneapolis: Fortress, 1990.

Gielen, Uwe. “Kohlberg’s Moral Development Theory.” In The Kohlberg Legacy for the Helping Professions, ed. Lisa Kuhmerker. Birmingham: Doxa Books, 1991.

Lebacqz, Karen. Six Theories of Justice. Minneapolis: Augsburg, 1997.

Mill, John Stuart. Utilitarianism. New York: Bobbs-Merrill, 1957.



[1] Uwe Gielen, “Kohlberg’s Moral Development Theory,” in The Kohlberg Legacy for the Helping Professions, ed. Lisa Kuhmerker (Birmingham: Doxa Books, 1991), 35, 55.

[2] John Stuart Mill, Utilitarianism (New York: Bobbs-Merrill, 1957), 10.

[3] Karen Lebacqz, Six Theories of Justice (Minneapolis: Augsburg, 1997), 25.

[4] Table adapted from Lebacqz, Six Theories of Justice, 25.

[5] Most certainly the “act” utilitarians. The “rule” utilitarian may object at this point.

[6] Henry E. Allison, Kant’s Groundwork for the Metaphysics of Morals: A Commentary (New York: Oxford, 2011), 71.

[7] Dietrich Bonhoeffer, Ethics, Dietrich Bonhoeffer Works, Volume 6, trans. Reinhard Krauss, Charles C. West, and Douglas W. Stott (Minneapolis: Fortress Press, 2005), 51. Italics original.

[8] This is an all-inclusive “his.” I could not come up with a gender-neutral way of articulating the following sentences without making them sound cumbersome and pedantically politically correct.

[9] Adapted from Warren Bennis and Joan Goldsmith, Learning to Lead: A Workbook on Becoming a Leader, 4th ed. (New York: Basic Books, 2010), xxi-xxii.

[10] Robert M. Franklin, Liberating Visions: Human Fulfillment and Social Justice in African-American Thought (Minneapolis: Fortress, 1990), viii.

[11] Robert C. Bartlett and Susan D. Collins, trans., Aristotle’s Nicomachean Ethics: A New Translation (Chicago: University of Chicago Press, 2011), 27.