Busting HBCU myths with data

By Jeneé Osterheldt and Tyler Dukes

There’s a long-standing myth that Historically Black Colleges and Universities, or HBCUs, do a poor job graduating their black students.

According to U.S. Department of Education data, only about 4 out of 10 black students at HBCUs graduate "on time" — that is, within six years of starting their freshman year.

Weighted average of graduation rates for black students at 84 HBCUs reporting to the U.S. Department of Education as of 2014, measured within six years of their start date, or 150 percent of normal time. SOURCE: Integrated Postsecondary Education Data System, PartNews analysis

At colleges and universities overall, by comparison, the share of black students who graduate on time is closer to 5 in 10.

Weighted average of graduation rates for black students at 1,671 colleges and universities, including HBCUs, reporting to the U.S. Department of Education as of 2014, measured within six years of their start date, or 150 percent of normal time. SOURCE: Integrated Postsecondary Education Data System, PartNews analysis
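For readers curious about the arithmetic, here is a minimal sketch of how an enrollment-weighted average like the ones above could be computed. The cohort sizes and rates below are made-up placeholders, not the actual IPEDS figures.

```python
# Sketch: enrollment-weighted average graduation rate across schools.
# The (cohort_size, grad_rate) pairs below are hypothetical placeholders.
schools = [
    (350, 0.32),   # black-student cohort size, 6-year graduation rate
    (1200, 0.45),
    (600, 0.38),
]

total_students = sum(size for size, _ in schools)
weighted_rate = sum(size * rate for size, rate in schools) / total_students
print(f"Weighted average graduation rate: {weighted_rate:.1%}")
```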

So what’s the deal?

Jay Z says numbers don’t lie, but they don’t exactly paint the whole Picasso either. It might seem like HBCUs have a low grad rate — but it’s just not that simple.

If you plot graduation rates for black students against the percentage of first-generation students at a college or university, it looks a little something like this.

Approximate plot of percentage of first-generation students (horizontal axis) vs. graduation rates for black students (vertical axis) in 2015 for about 1,600 colleges and universities reporting to the U.S. Department of Education. SOURCE: US DOE College Scorecard, PartNews analysis

The general trend is that the higher the percentage of first-generation students, the lower the graduation rate.

And that’s an important relationship, because when we look at where HBCUs fall on this plot, they tend to be scattered around here, toward the lower end of the graduation rates and the higher percentage of first-generation students.

Approximate locations of 85 HBCUs in a plot of percentage of first-generation students (horizontal axis) vs. graduation rates for black students (vertical axis) in 2015 for about 1,600 colleges and universities reporting to the U.S. Department of Education. SOURCE: US DOE College Scorecard, PartNews analysis
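As a rough illustration, a plot like the ones above could be built along these lines. The file name and columns ("first_gen_pct", "black_grad_rate", "is_hbcu") are hypothetical stand-ins for the College Scorecard fields, not the actual variable names.

```python
# Sketch: first-generation share vs. black-student graduation rate,
# with HBCUs highlighted. File and column names are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("college_scorecard_2015.csv")

fig, ax = plt.subplots()
ax.scatter(df["first_gen_pct"], df["black_grad_rate"],
           s=10, color="lightgray", label="All institutions")

hbcus = df[df["is_hbcu"] == 1]
ax.scatter(hbcus["first_gen_pct"], hbcus["black_grad_rate"],
           s=20, color="crimson", label="HBCUs")

ax.set_xlabel("First-generation students (%)")
ax.set_ylabel("Graduation rate, black students (%)")
ax.legend()
plt.show()
```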

On average, about 43 percent of students enrolled in HBCUs are first-generation. Compare that to about 36 percent for colleges overall.

Another factor: money. According to a Pell Institute study, students from families in the top income quartile (over $108,650) are eight times more likely to hold a college degree than students from the bottom quartile (under $34,160). About half of the nation's HBCUs have a freshman class where three-quarters of the students are from low-income backgrounds.

About 50 percent of the nation's HBCUs have a freshman class where 75 percent of students are from low-income backgrounds. SOURCE: Pell Institute

But just 1 percent of the 676 non-HBCUs serve as high a percentage of low-income students.

That bag makes a difference. Not to mention, the schools themselves have fewer resources.
According to the Thurgood Marshall College Fund, HBCU endowments are, on average, one-eighth the size of those at historically white colleges and universities.

And consider open-admission policies. HBCUs are more likely than other institutions to accept students with lower grades and SAT scores. The Postsecondary National Policy Institute found that over 25 percent of HBCUs are open-admission institutions, compared with 14 percent of other colleges and universities.

Despite the odds, HBCUs still make a major difference to their student bodies. These schools, which on the surface may seem to do a poor job of graduating black students, helped create the black middle class. At least, that's what a U.S. Commission on Civil Rights report says.

Historically Black Colleges and Universities have produced 40 percent of African-American members of Congress, 40 percent of African-American engineers, 50 percent of African-American professors at predominantly white institutions (PWIs), 50 percent of African-American lawyers, and 80 percent of African-American judges.

And to think, HBCUs represent only 3 percent of post-secondary institutions. Just saying: imagine what these schools could do with more funding and support.

Long live black excellence.

Movie Success: Is it in the Data?

By Dijana, Maddie, and Sruthi 

Despite all the talk every year about how out of touch the award ceremony and its results are, everyone in the film industry wants to win an Oscar. It's the most prestigious award in the industry, signifying that the recipient is at the top of the field. Regardless of the flaws of the voting process, which is completely subjective given the voting constituency, the number of Oscars a film wins is more often than not the main measure of a film's success.

But is there some other factor that contributes to that success? Do film critics sway the voters? Does public sentiment push movies into Oscar contention? Is there some correlation between the revenue of a movie and its Oscar potential? Does spending more on a film lead to more wins?

Using data sets that included quantitative data from IMDB, the American Film Institute, and Box Office Mojo, it’s clear that some factors have a stronger relationship with Oscar wins than others.

(Budget) Size Doesn’t Matter

Click to see full image

After charting the relationship between Oscar wins and the adjusted budget of each film released from 1928 to 2010 (with some omissions due to the incomplete data set), it's clear that the cost of a movie has no bearing on its overall success. Very rarely are big-budget movies major winners at the Academy Awards. In fact, Titanic, which cost approximately $200 million to make, is the only major winner with a significantly large budget by film-industry standards.
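A sketch of how such a chart (plus a quick correlation check) could be produced is below. The file name and columns ("adjusted_budget", "oscar_wins") are hypothetical, and the original analysis may have been done differently.

```python
# Sketch: Oscar wins vs. inflation-adjusted budget, with a quick correlation check.
# "films_1928_2010.csv", "adjusted_budget", and "oscar_wins" are hypothetical names.
import pandas as pd
import matplotlib.pyplot as plt

films = pd.read_csv("films_1928_2010.csv").dropna(subset=["adjusted_budget", "oscar_wins"])
print("Pearson correlation:", round(films["adjusted_budget"].corr(films["oscar_wins"]), 2))

plt.scatter(films["adjusted_budget"] / 1e6, films["oscar_wins"], s=12)
plt.xlabel("Adjusted budget ($ millions)")
plt.ylabel("Oscar wins")
plt.show()
```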

There may be many inferences to draw from the data, such as the fact that the most expensive movies tend to be summer releases geared toward the general population rather than the serious film crowd. Budgets may have been increasing over the last several decades (see graph below), but the number of films winning a significant number of awards has not. However, given the lack of available data, these points remain speculation.

Click to see full image

Mo’ Money, Mo’ Oscars?

The size of a budget may not signal a greater probability that a film wins more Oscars, but what about overall revenue? Does box office success matter to Academy voters?

Looking at the data from Box Office Mojo of the 25 movies released before 2011 with the highest domestic grosses, adjusted for inflation, it’s clear that there is no significant relationship between revenue and Oscar wins.

Click to see full image

Academy members aren't swayed by box office smashes when it comes to choosing Best Picture. Though there are a few outliers, such as Gone with the Wind and Titanic, that have earned significant revenue at the box office as well as Academy Awards, for the most part more revenue does not signify more Oscars. In fact, six of the 25 movies did not win a single Oscar, and three won only one Academy Award.

The People vs. Oscar Wins

In the film industry, an Oscar is an incredible achievement, signifying the quality of the end product and the work that went into creating it. But does the public see these films in the same way? Just how popular are the most successful movies?

Using data from IMDb, the rating of each film (which any IMDb.com visitor can vote on) was compared to its total Oscar wins:

Click to see full image

Surprisingly, the most popular film on IMDb according to the public rating, at 9.2 out of 10, is The Shawshank Redemption, a film that, while nominated for seven Academy Awards, came away empty-handed. On the other hand, two of the three films with the most Oscar wins (11), Titanic and Ben-Hur, failed to make the top 250 ratings on the website, with ratings of 7.7 and 5.7, respectively. The Lord of the Rings: Return of the King is one of the few films that bucks this trend, with a rating of 8.9 and 11 Oscar wins.
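One way to put a number on this mismatch is a rank correlation between rating and wins. This is an illustrative sketch with hypothetical file and column names, not part of the original analysis.

```python
# Sketch: rank correlation between IMDb rating and Oscar wins.
# "imdb_oscars.csv", "imdb_rating", and "oscar_wins" are hypothetical names.
import pandas as pd

films = pd.read_csv("imdb_oscars.csv").dropna(subset=["imdb_rating", "oscar_wins"])
rho = films["imdb_rating"].corr(films["oscar_wins"], method="spearman")
print(f"Spearman rank correlation: {rho:.2f}")
```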

But What Do the Experts Think?

When it comes to measuring the success of a film, one major group has been ignored thus far: film experts, including historians and critics. To get a better sense of how much expert opinion matches up with the Academy’s, the American Film Institute’s list of top 25 movies of all time (up to 2010) was used as the primary source for analysis:

Click to see full image

Much like the previous analyses, the number of wins does not match up well with the rankings. While several movies, such as Gone with the Wind, Lawrence of Arabia, and On the Waterfront, each won several awards and were ranked in the top 10, the top film of all time, Citizen Kane, won only one Oscar. Even more surprising, several of the top 25 films, including Singin' in the Rain, Psycho, and It's a Wonderful Life, received zero Academy Awards.

Of course, several caveats apply to the data. The number of categories, and therefore possible wins, has increased substantially since the first Academy Awards in 1928. Similarly, there is no way to determine the competitiveness of the field in a given year: there's no way of knowing whether a film that is highly regarded by critics and the public would have won more awards had it been released in another year. Additionally, not all Oscars are created equal, and more weight may need to be given to categories like Best Picture and Best Director.

Despite the issues with the data, one thing remains clear: Oscar wins may be a measure of success for the industry, but there is very little, if any, evidence that these other criteria matter when it comes to predicting that success. So instead of trying to use an IMDb rating to predict the next Oscar winner, it may be better to just guess blindly.

Anti-Semitic Incidents in MA: A Tale of Two Data Sources

By AAAD (Arthur, Anne, Anne, Drew)

“Data-driven” is the theme of the modern age. From business decision-making to policy changes, from news stories to social impact evaluations, data is a foundational building block for many organizations and their causes. However, while we would like to think that all data represent absolute truths, the real world presents many challenges to accurate reporting.

Our team was motivated by the question: How has the prevalence of anti-Semitic incidents in Massachusetts changed over the past several years? In our exploration of this question, we learned an old but important truth: when you see data, dive deep and make sure you understand how the data collection methods could affect the resulting numbers.

To begin our exploration of anti-Semitic incidents in MA, we looked into two sources: the Anti-Defamation League (ADL) and the Massachusetts Executive Office of Public Safety and Security (EOPSS). Right away, we noticed obvious discrepancies in the annual totals of anti-Semitic incidents reported by the two sources:

Anti-Semitic incidents in Massachusetts

Year ADL EOPSS
2015 50 40
2014 47 36
2013 46 83
2012 38 90
2011 72 92
2010 64 48

Source: ADL press releases and EOPSS data from a FOI request
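For anyone who wants to poke at the gap directly, the table above can be loaded and compared in a few lines of pandas. The numbers are copied from the table; nothing else is assumed.

```python
# Compare the ADL and EOPSS annual totals shown in the table above.
import pandas as pd

counts = pd.DataFrame({
    "year":  [2015, 2014, 2013, 2012, 2011, 2010],
    "adl":   [50, 47, 46, 38, 72, 64],
    "eopss": [40, 36, 83, 90, 92, 48],
})
counts["difference"] = counts["adl"] - counts["eopss"]
print(counts.to_string(index=False))
```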

After seeing these discrepancies, we decided to dig deeper and try to understand what might account for the differences.  We began by investigating how the data is collected and then comparing differing statistics between the two sources.

EOPSS’ approach and its implications

Massachusetts passed its first hate crime legislation in 1991, but not every agency has adhered to it. According to reports from the Massachusetts Executive Office of Public Safety and Security (EOPSS), the state did not begin tracking non-reporting agencies until 2005.

The Massachusetts Hate Crimes Reporting Act "requires that the number of hate crimes that occur in the Commonwealth be submitted to the Secretary of Public Safety annually, even if that number is zero" (M.G.L. c. 22C, § 32). Nonetheless, as late as 2014, some districts were not reporting this statistic. The FBI also compiles hate crime data, though submitting this information is voluntary. Some Massachusetts agencies that have failed to report hate crime data to the FBI have stated they did not realize the FBI had even requested the information.

The accuracy of hate crime reporting data can be influenced by a number of factors, including record-keeping procedures within a given agency and whether or not officers are trained to inquire about the factors that qualify crimes as hate crimes.

When agencies do not report data to the state, any hate crimes occurring in those districts are not represented in the official state statistics. Agencies that have zero hate crimes should report zero hate crimes to the state (these are designated as "zero-reporting" agencies in official reports). A further complication in determining trends arises when formerly non-reporting agencies begin reporting nonzero numbers of hate crimes.

Data collected by Massachusetts indicates the population covered by agencies that did not report hate crime statistics grew from roughly 66,000 in 2011 to over 300,000 in 2014.

Massachusetts has recently taken steps to increase the public’s ability to report hate crimes, setting up a hotline in November of 2016. Some police districts also have a designated Civil Rights Officer to handle hate crimes.  

The issues raised by non-reporting are far from academic. When national tragedies occur, one reaction may be an increase in hate crimes against particular populations. In these cases, hate crime statistics can provide insight about the implications for local communities.

In the wake of the 2016 presidential election, Bristol County Sheriff Thomas Hodgson called for arrest warrants to be issued for elected officials of "sanctuary cities." This prompted Somerville Mayor Joe Curtatone to defend the legality of sanctuary cities and refer to Sheriff Hodgson as a "jack-booted thug." He further taunted Hodgson to "come and get me." These flare-ups between public officials indicate the tension that has formed in the public sphere around the issue of immigration.

Hate crime reporting statistics can provide a tool to measure claims of anti-immigrant incidents and give the public a sense of whether these incidents are on the rise. Massachusetts has responded to concerns about an increase in hate crimes by setting up a hate crime reporting hotline.

Official statistics from police departments and college campuses can bring clarity to the issue, but Massachusetts must both require and enforce reporting mandates as well as provide training to local agencies to improve and standardize the reporting of these statistics.

ADL’s selected approach and its implications

Another source of data on anti-Semitic crimes in Massachusetts comes from the Anti-Defamation League (ADL), a Jewish NGO. The ADL was founded in the United States in 1913 and aims to "stop anti-Semitism and defend the Jewish people," according to its website.

Since 1979, the ADL has conducted an annual “Audit of Anti-Semitic Incidents.” The ADL’s data partially overlaps with official data — they use data from law enforcement — but they also collect outside information from victims and community leaders.

The limitations of the ADL's audits are like those of any audit trying to cover anti-Semitic crimes. The way the ADL handles them, however, should be noted carefully, as it greatly affects the resulting numbers.

First of all, unlike the official data, the ADL also includes non-criminal acts of "harassment and intimidation" in its numbers, which encompass hate propaganda, threats, and slurs.

Another key difference from the official data is that ADL staff attempt to verify all anti-Semitic acts included in the audit. While crimes against institutions are easier to confirm, harassment against individuals that is reported anonymously poses unique verification challenges.

Additionally, some crimes are not easily identifiable as anti-Semitic even though they may be. In their annual audit, the ADL considers all types of vandalism at Jewish institutions to be anti-Semitic, even without explicit evidence of anti-Semitic intent. This includes stones thrown at synagogue windows, for example.

On the other hand, the ADL does not consider all swastikas to be anti-Semitic. As of 2009, it has stopped counting swastikas that don't target Jews, since the swastika has in some cases become a universal symbol of hate.

The ADL also does not count related incidents separately. Instead, crimes or harassment that occurred in multiple nearby places at similar times are counted as one event.

All of these choices made by the ADL greatly affect the numbers that they produce each year.

Comparing and contrasting the results of the two methodologies

Numbers can tell different stories depending on the choices and circumstances surrounding the ADL's and EOPSS's hate crime data collection processes. To demonstrate this, we compare some of the conclusions drawn from the two datasets for anti-Semitic hate crimes in Massachusetts.

Starting small: One location claim

One of the ADL’s figures for 2013 indicated that 28% (or 13) of the 46 total anti-Semitic incidents that year took place on a school or college campus.  If we look for the same percentage in the EOPSS data, we find a similar 29% of reported 2013 incidents occurring on a school or college campus.  

This single summary seems to bode well for comparisons between the two datasets; however, things get a little hazier when you look at the absolute numbers. Instead of 13 out of 46 total incidents, the EOPSS data reported 24 out of 83 incidents on a school or college campus, and it's unclear what accounts for the difference in scale.

Time trends in reports

If we look at time trends, 25% of the anti-Semitic incidents in Massachusetts reported by the ADL in 2014 occurred in July and August, while that figure was 8% for the same time period in 2013.  

The ADL's New England Regional Director attributed that "marked increase" in anti-Semitic incidents to the 50-day Israel-Gaza conflict that took place from July 8 to August 26, 2014, saying, "This year's audit provides an alarming snapshot into anti-Semitic acts connected to Operation Protective Edge over the summer as well as acts directed at Jewish institutions. The conflict last summer demonstrates how anti-Israel sentiment rapidly translates into attacks on Jewish institutions."

If we look at EOPSS data for 2013 and 2014, however, there appears to be no sign of a marked increase in anti-Semitic incidents recorded in the summer months — in fact, in absolute numbers, both incidents in July/August and incidents in the entire year decreased from 2013 to 2014 in the EOPSS data.  

Because the ADL does not provide its underlying data to the public, we can't dig into the stories of the specific incidents in July/August 2014 to see whether they could indeed be a result of the Israel-Gaza conflict. Additionally, with reporting methodologies that are neither particularly scientific nor consistent, it's hard to draw concrete conclusions from either of these datasets.

Incident types: Differences might be explained by differing reporting policies

Thus far, we've identified contradictions between the two datasets, but we have not been able to discern how the two data collection methods may have specifically contributed to those contradictions.

One topic where we can attempt to do so is the matter of vandalism:

According to the annual ADL audit, 16 of the 46 anti-Semitic incidents in Massachusetts in 2013 (35%) involved vandalism. The same figure from the ADL for 2014 was 23 vandalism incidents out of 47 total anti-Semitic incidents (49%). In EOPSS's numbers, however, vandalism makes up an even larger share of anti-Semitic incidents in Massachusetts.

As discussed previously, the ADL reports all vandalism of Jewish institutions as anti-Semitic incidents, but does not count all vandalism involving swastikas as anti-Semitic in its data. Although not directly specified, the EOPSS datasets likely do categorize all reports of swastikas as anti-Semitic vandalism, which would be a possible explanation for the large discrepancy in percentages (on top of the simpler explanation that, with numbers of this magnitude and lack of precision, variation is inevitable).

Do Data Due-Diligence!

Investigating the discrepancies and the data collection methodologies was not merely an academic exercise: it demonstrated a necessary step in understanding what kinds of conclusions you can reasonably draw from your data and what kinds of caveats you should include when reporting or making decisions based on that data.

Using only one dataset without exploring how the data was collected and digging into the details of the data could yield very different headlines:

Blindly using ADL data might yield: "Anti-Semitic hate crimes in Massachusetts increase 2% from 2013 to 2014." (This was just 46 to 47 incidents — is it really reflective of the situation to call that a 2% increase? Does this reflect reality?)

Blindly using EOPSS data might yield: “Massachusetts became safer for Jewish people in 2014: anti-Semitic hate crimes dropped 43% from 2013.”  (Is this message true, or is this “trend” due to data collection issues? Why does it paint such a different picture from the ADL data?)
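Both headline numbers come from the same simple percent-change arithmetic, which is worth seeing side by side: with counts this small, a tiny absolute shift can produce a dramatic-sounding percentage.

```python
# Percent-change arithmetic behind the two hypothetical headlines above.
def pct_change(old, new):
    return (new - old) / old * 100

print(f"ADL, 2013 -> 2014:   {pct_change(46, 47):+.1f}%")   # roughly +2%
print(f"EOPSS, 2013 -> 2014: {pct_change(83, 47):+.1f}%")   # roughly -43%
```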

Do your data due-diligence.


Where are Pulitzers Won?

Yesterday saw the announcement of the 2017 Pulitzer Prizes. Awarded in some form or another for one hundred years, the Pulitzers represent the peak of journalistic recognition as well as literary and musical accomplishment.

Though the categories celebrating journalism have shifted somewhat over the years, the Pulitzers have long recognized quality reporting at all levels, from the local to the international. So what can analysis of who won the awards tell us about the geographic spread of successful journalism?

For this assignment I analyzed where four different categories of Pulitzers were awarded over the course of the last century. First, I looked at the Pulitzer for Local Investigative Specialized Reporting, a category awarded since 1964. Scraping the data from a list on Wikipedia, I calculated the number of awards given to titles in each U.S. state, and used the visualization tool Datawrapper to display the results:

backup link: //datawrapper.dwcdn.net/1rbUv/2/
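The scrape-and-count step could look roughly like the sketch below. The table index and the "State" column are assumptions about how the Wikipedia list is laid out; the real page may need extra cleaning.

```python
# Sketch: count Local Investigative Specialized Reporting Pulitzers by state.
# The table index and "State" column are assumptions about the Wikipedia layout.
import pandas as pd

URL = "https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Local_Investigative_Specialized_Reporting"
tables = pd.read_html(URL)   # parses every HTML table on the page
winners = tables[0]          # assume the first table lists the winners

state_counts = winners["State"].value_counts()
state_counts.to_csv("local_pulitzers_by_state.csv")  # ready to upload to Datawrapper
print(state_counts.head(10))
```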

23 out of 50 states have seen a title win a Pulitzer for local reporting – a decent geographic spread. Next I looked at the prizes for National and International Reporting respectively:

backup links: //datawrapper.dwcdn.net/399c2/2/

//datawrapper.dwcdn.net/kE4ZQ/2/

As these charts show, larger states have tended to dominate the National and International categories, which makes sense given the consolidation of resources in large bureaus, particularly in New York and Washington. For international reporting especially, New York dwarfs all other states, accounting for well more than half of all International Pulitzers.

Yet the Public Interest category, displayed below, shows much more geographic diversity. Though New York and California, as large states, still lead the way with 10 prizes each, Pulitzers for work in the public interest have been awarded in fully 31 states plus DC, and states like North Carolina (6 awards) and Missouri (4) have been frequently recognized.

backup link: //datawrapper.dwcdn.net/esVb3/1/

This analysis suggests that while major titles like the New York Times and Washington Post have long led the way with their hard-hitting reporting at the national and international levels, for a century now newspapers at every level and in a majority of states have performed award-winning journalism in the public interest. These local titles, exposing municipal corruption and state-level scandal, are the backbone of American journalism and – facing the most danger from the loss of advertising revenue and corporate consolidation – are most in need of ongoing financial support.

Climate change & terrorism: The data

Last November, Presidential candidate Bernie Sanders raised some eyebrows when he said, “…climate change is directly related to the growth of terrorism. If we do not get our act together and listen to what the scientists say, you’re gonna see countries all over the world — this is what the CIA says — they’re going to be struggling over limited amounts of water, limited amounts of land to grow their crops, and you’re going to see all kinds of international conflict.”

Since then, a number of media outlets have fact-checked this statement, and PolitiFact has rated this comment as being Mostly False. You can read about PolitiFact’s full analysis here.

While Sanders's comments were perhaps too direct in establishing a causal relationship between climate change and terrorism, he's not alone in framing climate change as a destabilizing force that terrorist organizations can take advantage of. The Defense Department described climate change as a "threat multiplier" in a 2014 report, and Al Gore has said numerous times that the Syrian Civil War was caused by extreme drought conditions, which were in turn caused by climate change.

While these arguments make intuitive sense, beyond anecdotal one-off instances (e.g., drought in Syria contributing to the Syrian Civil War, drought in Nigeria contributing to the rise of Boko Haram), what has been lacking is a comprehensive review of extreme weather conditions globally in recent years, and of whether the geographies facing the worst impacts of climate change have seen an increase in terrorist activity. Based on Sanders's statements, that seems like a reasonable expectation.

The first place to look was where climate change has hit hardest in recent history. Mapped below is a heatmap of the impact of extreme weather events on population. The higher the number, the greater the percentage of the population that has been impacted by extreme weather such as droughts and floods.

Data source: IMF. Extreme weather impact on percentage of population, 1990-2009

An interactive version of the map is here: https://public.tableau.com/profile/publish/Apr11/Story1#!/publish-confirm

Swaziland, Malawi, China, Niger, and Eritrea are the countries whose populations have been most impacted by severe weather conditions. If Sanders's comments hold true, we should also see the highest levels of terrorist activity in those countries in recent history. Mapped below is the number of casualties from terrorist incidents since 1980. Casualties were plotted here instead of the number of incidents to show the severity of terrorist activity.

https://public.tableau.com/views/Terrorismvs_GDP/Dashboard1?:embed=y&:display_count=yes

It is immediately apparent that those 5 countries do not have anywhere near the highest number of terrorist casualties in the past two to three decades.

Also included in the interactive map for context is the year-over-year percentage change in GDP, to potentially show the amplifying effect of climate change and poor economic conditions on terrorist activity. However, based on the data presented, no direct relationship between either climate change or economic health and terrorist activity is easily seen. Sanders's comments don't hold up against the data. Instead, as Time and PolitiFact have indicated, there seem to be many other factors that contribute to terrorist incidents.
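A sketch of that country-level comparison is below. The file and column names are hypothetical stand-ins for the IMF extreme-weather figures and the terrorism casualty data, and the join assumes country names match across the two files.

```python
# Sketch: join extreme-weather impact with terrorism casualties by country
# and check for a relationship. File and column names are hypothetical.
import pandas as pd

weather = pd.read_csv("imf_extreme_weather_impact.csv")    # country, pct_pop_affected
terror = pd.read_csv("terrorism_casualties_1980_on.csv")   # country, casualties

merged = weather.merge(terror, on="country", how="inner")
print("Correlation:", round(merged["pct_pop_affected"].corr(merged["casualties"]), 2))
print(merged.sort_values("pct_pop_affected", ascending=False).head())
```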


Energy and Independent Thought

This was a really long process! I explored debunking a number of widespread beliefs and myths: environmental legislation costs jobs, people in the US pay the highest taxes, access to family planning increases abortion. The answer to most of these was not a single fact that was true or false, but a framework of beliefs and approaches to viewing the world that fosters the acceptance of particular arguments.

Environmental legislation and regulation: It’s complicated–but not really.

Upshot: The majority of new jobs in the energy sector domestically and internationally will be renewable energy jobs and related support services, but there will be and are "losers" (e.g., coal industry miners and related support services). Pro-environment legislation and state subsidies to businesses and private citizens in states like California have created a booming renewable energy market and spurred technological innovation. States like Ohio and West Virginia, which have actively resisted moves toward renewable energy, are not well-positioned to take advantage of emerging production or related support jobs in this sector. Renewable energy and related jobs will be global economic drivers regardless of whether or not the United States chooses to participate. Other countries have made independent decisions regarding their energy futures and chosen to pursue renewables. Independent, publicly funded research and policy-making efforts attempt to analyze complex issues in a non-partisan manner, but the media (and research) ecosystem is pervaded by partisan think tanks, private money, and even foreign dollars. These factors undermine the credibility of sources of information and, correspondingly, public trust in those sources.

Why so complicated? 

1. Energy is actually complicated – Energy use and production affects individuals, businesses, corporations, and governments–each with competing and sometimes overlapping interests.

2. Poor-quality or biased [mis]information – A decrease in publicly funded science and non-partisan governmental organizations has resulted in less transparent, less reliable sources of information. Prior to neoliberal globalization, government agencies, universities, and think tanks were much less affected by corporate interests and lobbying (see Science-Mart for a particularly in-depth treatment of the topic).

The energy sector is an appropriate case study of a phenomenon that has occurred in other previously public areas (e.g., education).

Both renewable and fossil energy lobbying groups and think tanks have a strong presence. These groups are motivated to generate research and studies supporting their economic aims. When the organization is private, the public cannot access records of financial contributions and identify donors. This undermines transparency and credibility.

Domestically, a variety of political, social, and economic actors have invested themselves in the creation of a research and development framework that favors private, corporate interests over the public good and civil society.

The Think Tank Watch project at the University of Pennsylvania classifies and tracks think tank activity. The think tank picture is further complicated as foreign governments buy influence in American think tanks.

Think Tank Project's Classification System: http://repository.upenn.edu/cgi/viewcontent.cgi?article=1011&context=think_tanks

An energy-related project, the Think Tank Map, tracks think tanks that conduct climate-related research. The Buckeye Institute is a conservative, pro-fossil fuel private organization with an annual budget of $2.7 million. Members of its Board of Trustees have ties to the fossil fuel and plastics industries. This organization generates reports that would fail to meet basic academic standards for objective research. There's nothing wrong with lobbying for policy, but to do so under the guise of an "independent think tank" misrepresents the mission of the organization and undermines the work of independent, publicly funded think tanks. Pro-renewable think tanks and lobbyists further cloud the picture with reports that lobby for their own interests.

Energy policy set by Washington only influences, but does not dictate, the paths followed by other countries and transnational corporations. Failure by Washington to invest in profitable sectors, including renewable energy, will have a long-term effect on the American economy.

Taxpayers have invested in their own futures and in the success of businesses by funding the underlying physical (e.g., highways) and virtual (e.g., the internet) infrastructure upon which corporations rely. Independent and transparent research and organizations are critical to the continuation of this cycle of success.


Female drivers, safe roads

Stereotypes about women's tasks and (lack of certain) skills are very dominant in Serbia. These prejudices and sexist beliefs are often supported by women themselves, as if they gain power by agreeing with common views and men's perspectives. My goal is to argue the opposite in a short and simple way, perhaps offering an alternative narrative that could be accepted without seeming "threatening" to the existing manhood mentality. Specifically, I focused on debunking the myth that women are inherently bad drivers.


Sex education should include safe sex practices

Find 1-min video w/ narration here:
https://www.dropbox.com/s/9jfc8qabqhzsefr/safe_sex_ed.pptx?dl=0
– Play it by opening the PowerPoint file, selecting 'Slide Show,' and playing 'From Start.'
– I was not able to make it a video file, because screen recording messes up the sound.

Comments about the assignment:

Wow—what a challenging assignment! At first I considered everything from promoting one year of federally funded maternity/paternity leave to increasing science funding for medical applications of psychedelics. Then I started off trying to argue for queer-inclusive sex ed and realized that I could only argue that to people already on board with safe-sex-oriented sex ed in the first place. For those people, who are also likely to support queer rights, making sex ed queer-inclusive is likely more an awareness issue than a disputed issue.

So I landed on arguing for sex ed oriented around safe sex practices. I wanted to build a case around randomized controlled trials measuring the effectiveness of abstinence-only programs vs. safe sex programs on metrics like teen pregnancy and STI rates, but it turned out that the data I wanted either does not exist or is hard to come by. It is hard for studies to measure anything beyond self-reported sexual behavior.

Next, without the quantitative numbers to make a good argument, I wanted to make a rational argument for why sex ed should focus on safe sex practices. However, I felt like my rational arguments relied on value judgments that a conservative audience would not share. In the end I used an appeal to authority, having read that 70% of Republicans do in fact trust the CDC as an organization.


Is Facebook really the best place to work? It might not be, after all

I visited the new Facebook campus in Menlo Park, California last week. Like everybody, I couldn't help but be impressed: the building spans 430,000 square feet, it is the largest open workspace in the world, and it was designed by Frank Gehry, who also designed Los Angeles' Disney Concert Hall. The various activities offered (a room dedicated to video games and a nap area, for instance) and the wide range of food options encountered at every corner (hamburgers, coffee, ice cream, hot dog stands, etc.) definitely give the wanderer the illusion of being in an amusement park rather than a workspace.

Last year, employees on Glassdoor voted Facebook the No. 1 company to work for overall. Even though Facebook has often been regarded as one of the best places to work in the tech industry, this article is meant to show that this model presents many downsides.

Employee reviews are actually quite mixed, as shown here:

https://www.indeed.com/cmp/Facebook/reviews

The Facebook campus model does not enable any work-life balance

The Facebook campus is open 24/7, and offers food all day long and nap areas to its employees. Some employees admit that they tend to stay at work longer as a result. A few even sleep at the office and no longer eat at home.

And employees have to stay connected all the time

“For six weeks out of the year, I’m on 24/7 on-call duty”; “You can never really leave work, even when you’re on vacation”; “Ungodly amounts of email from internal communications, 1,600 or more a day”; “At most companies, you put up a wall between a work personality and a personal one, which ends up with a professional workspace. The wall does not exist at Facebook”

http://www.indiatimes.com/lifestyle/technology/10-employees-talk-about-working-at-facebook-and-it-doesn-t-sound-all-that-pretty-252275.html

The firm's culture can also be called into question

“A large company trying to act like a young one”; “I’ve seen decisions being made by interns”; “Looking too hard at Google”; “Working for Facebook sometimes means wasting a lot of time browsing Facebook”

http://www.businessinsider.com/the-22-worst-things-about-working-at-facebook-according-to-employees-2015-7/#ck-and-sheryl-imposing-a-holier-than-thou-attitude-13

Even though they benefit from many free services, employees' lifestyles are not as healthy as they used to be:

Free bikes and a gym are available on campus to replace the usual weekly exercise. However, most employees admit that going to the gym is no longer part of their regular schedule and that they do not go as often as they did before joining Facebook. Also, with free food and snacks available in every building and on every floor, many start gaining weight as soon as they join the firm.

Facebook is accelerating the gentrification of Silicon Valley

Facebook offered employees $10,000 to live close to the office, and as a result "a lot of local families are going to get hurt."

https://www.theguardian.com/technology/2015/dec/18/facebook-offers-employees-10000-to-live-close-to-the-office

The relationships between colleagues are altered

Facebook employees have to become "Facebook friends" with their colleagues. Do your colleagues always have to be your friends? Some admit that looking at the pictures posted daily by colleagues alters the image they have of their coworkers and affects their working relationships.

The company does not manage its own growth well

In 2010, Facebook had 1,700 employees. In 2016, it had 11,996, and critics have raised concerns about Facebook's quick scale-up and its inability to keep its start-up culture.

https://www.fastcompany.com/3053776/how-facebook-keeps-scaling-its-culture