
Arctic river runs red following devastating Russian fuel spill


Furious Russian President Vladimir Putin has ordered a state of emergency after a devastating fuel spill occurred in the Arctic.

Environmentalists say it is the worst accident of its kind ever to happen in the Arctic, with more than 20,000 tonnes of diesel bursting from a fuel tank at an industrial site.

The diesel reservoir collapsed at a power station outside the northern Siberian city of Norilsk last Friday, releasing 15,000 tonnes of fuel into a river and 6,000 tonnes into the soil, according to Russia’s state environmental watchdog.

The fuel was being stored there to ensure a continuous supply to a nearby power plant in case of an interruption to gas supplies.

Putin angrily criticised the delay in the cleanup operation, saying the authorities weren’t notified promptly. However, Norilsk Nickel, the company that owns the collapsed fuel reservoir through a subsidiary, insists it notified the proper agencies immediately.

An aerial view of the site of the oil products spill into a river outside Norilsk, Russia. (Credits: EPA)

Greenpeace Russia said the accident was the ‘first accident of such a scale in the Arctic’ and comparable to the Exxon Valdez disaster off the coast of Alaska in 1989.

Russia’s Investigative Committee said a power station supervisor had been detained and would be charged shortly as it conducts three probes into environmental pollution and safety violations.

The Ambarnaya River, which is affected by the spill, feeds into Lake Pyasino, a major body of water and the source of the Pyasina River that is vitally important to the entire Taimyr peninsula.

Russian fisheries agency spokesman Dmitry Klokov said restoring the polluted water system would take ‘decades’.

Rescuers working at the site of the oil products spill into the Ambarnaya River outside Norilsk, Russia. (Credits: EPA)
A rescuer pumps out pollution from a large diesel spill in the Ambarnaya River outside Norilsk (Photo by YURI KADOBNOV/Marine Rescue Service/AFP via Getty Images)

‘The scope of this catastrophe is being underestimated,’ he told the TASS news agency, adding that most of the fuel had sunk to the bottom of the river and already reached the lake.

The marine rescue service has put up six oil containment booms in the Ambarnaya River to stop the diesel fuel reaching the lake and is using special devices to skim off the fuel.

But the clean-up mission is being hampered by the lack of roads in the area and by windy weather that has already caused blocks of ice to breach the barriers, releasing more fuel towards the lake and forcing responders to reposition them, marine rescue service spokesman Malov said.

‘It’s swampy territory, and everything can only be delivered there on all-terrain vehicles,’ Malov said, predicting that the collected fuel will have to stay on site until the winter in special tanks.

Putin is furious at the delay in cleanup operations (Reuters)

Norilsk Nickel said the accident possibly occurred because the ground under the fuel reservoir subsided as the permafrost melted due to climate change.

WWF expert Alexei Knizhnikov said that while climate change does affect permafrost, the accident would not have happened if the company had followed the rules.

This photograph, released by the Marine Rescue Service of Russia, shows a rescuer pumping out pollution from a large diesel spill in the Ambarnaya River outside Norilsk. (Photo by YURI KADOBNOV/Marine Rescue Service/AFP via Getty Images)

Under Russian law, there should be a containment structure around any fuel reservoir, which would have kept most of the spillage on site, he said.

‘A lot of the blame lies with the company,’ he added.

The difficult terrain prompted some officials to suggest the collected fuel should be burned off at the scene, but Russia’s environmental watchdog chief Svetlana Radionova on Thursday ruled this out.




Come to office once a week or face pay cut: Maharashtra to govt employees




The Maharashtra government has made it mandatory for all government employees to report to work once a week amid the lockdown, failing which they will face a pay cut.


As per a notification issued by additional chief secretary (finance) Manoj Saunik, all government departments have been asked to prepare a roster of the officers and employees affiliated with them.



“All employees, except those on sanctioned leave or medical leave, will need to be in office for one day in a week compulsorily,” the order read.

The notification also stated that disciplinary action will be taken by department heads against employees who leave the office without permission during the lockdown.

 



It also warned that an employee will lose a full week’s pay if he or she remains absent on the assigned day.


However, if an employee is required to be present in the office for more than one day a week, his salary will be cut only for the days he remained absent, the notification said.


The order comes into effect on June 8. The coronavirus-triggered lockdown is in force till June 30.


The notification was issued after it came to light that employees were not reporting to work during the lockdown and some had even left for their hometowns.


At present, government offices are functioning with 5 per cent staff or 10 persons, whichever is more.




Why Most Americans Support the Protests

But with a confrontational and sometimes messy gale of protests appearing to gain broad support, there is evidence to suggest that the calculus is not always so straightforward. There were scattered incidents of looting and arson during the peak of the Black Lives Matter movement, most notably in Ferguson, Mo., where Michael Brown was killed, and Baltimore, where Freddie Gray died, yet sentiment swung heavily in favor of the movement.

And a separate study, from a three-person team of political and social scientists, found that the Rodney King riots of 1992 helped to mobilize liberal white voters and African-Americans in Los Angeles, leading to a leftward shift in some city policies.

Douglas McLeod, a journalism professor at the University of Wisconsin who studies the impact of news coverage on social movements, said people consumed a wider variety of information today, pointing in particular to social media. This can help to circumvent what he called “several conventions in media coverage of social protest that work against the protesters” — including a tendency to focus on instances of protester violence, even when they’re relatively rare, and to privilege the accounts of those in uniform.

Dr. McLeod said that as videos showing police brutality against black people have appeared relentlessly on social media, they have helped persuade skeptical Americans that an endemic problem exists. “When these things accumulate over time, and we start to see more and more of these images, the evidence starts to become more incontrovertible,” he said.

The current round of protests is youth-led, and so too, to some degree, is the shift in nationwide sentiment. Millennials and members of Generation Z are far more likely to say they believe the police are prone to racist behavior. And according to a PBS/NPR/Marist College poll last year, members of those generations were more than twice as likely to support reparations for slavery, compared with baby boomers and others in older generations.

A Pew survey in 2018 also found a stark generational divide over whether N.F.L. players were right to kneel in protest of racial inequality. Among millennials and teenagers in Generation Z, more than three in five expressed approval of the protests; among baby boomers and other older Americans, an equally large share said they disapproved.

Similar trends play out specifically among young black people and other people of color, who express a greater desire for sweeping change, and a more unanimous suspicion of the police. In a recent Washington Post/Ipsos poll of African-Americans, among respondents 35 and under, nine out of 10 said they did not trust the police to treat people of all races equally — higher than in any other age group.


Cory Booker Says He ‘Thought Twice’ Walking Home in Regular Clothes



Kirti Kulhari feels respecting nature is to know when to stop

Image Source : INSTAGRAM/KIRTIKULHARI

On the occasion of World Environment Day on Friday, actress Kirti Kulhari took some time to mull over nature and human existence. She feels humankind has been exploiting nature, and says the lockdown has taught us that we need only minimal resources to live.

“Respecting nature is about being grateful, having an attitude of gratitude towards everything that nature offers us. Everything in our existence is because of nature and I think it’s about recognising it and being thankful for it,” Kirti said.

“Respecting nature is to know when to stop. We as humankind have been exploiting nature, this lockdown has definitely taught us that we need very minimum resources to live and I hope we all now have understood its importance,” she added.

The actress feels that, as a race, we “take everything for granted”.

“It is like a default setting in all of us. There is a lot for us to learn and to change in ourselves, to actually be worthy of living in this world,” Kirti added.

On the work front, Kirti was seen in the second season of “Four More Shots Please!”. The web series tells the tale of four unapologetically flawed women as they discover life while balancing friendship in Mumbai.


How to detect unwanted bias in machine learning models


In 2016, the World Economic Forum claimed we are experiencing the fourth wave of the Industrial Revolution: automation using cyber-physical systems. Key elements of this wave include machine intelligence, blockchain-based decentralized governance, and genome editing. As has been the case with previous waves, these technologies reduce the need for human labor but pose new ethical challenges, especially for artificial intelligence development companies and their clients.

The purpose of this article is to review recent ideas on detecting and mitigating unwanted bias in machine learning models. We will discuss recently created guidelines around trustworthy AI, review examples of AI bias arising from both model choice and underlying societal bias, suggest business and technical practices to detect and mitigate biased AI, and discuss legal obligations as they currently exist under the GDPR and where they might develop in the future.

Humans: the ultimate source of bias in machine learning

All models are made by humans and reflect human biases. Machine learning models can reflect the biases of organizational teams, of the designers in those teams, of the data scientists who implement the models, and of the data engineers who gather the data. Naturally, they also reflect the bias inherent in the data itself. Just as we expect a level of trustworthiness from human decision-makers, we should expect and deliver a level of trustworthiness from our models.


A trustworthy model will still contain many biases because bias (in its broadest sense) is the backbone of machine learning. A breast cancer prediction model will correctly predict that patients with a history of breast cancer are biased towards a positive result. Depending on the design, it may learn that women are biased towards a positive result. The final model may have different levels of accuracy for women and men, and be biased in that way. The key question to ask is not “Is my model biased?”, because the answer will always be yes.

Searching for better questions, the European Union High Level Expert Group on Artificial Intelligence has produced guidelines applicable to model building. In general, machine learning models should be:

  1. Lawful—respecting all applicable laws and regulations
  2. Ethical—respecting ethical principles and values
  3. Robust—both from a technical perspective while taking into account its social environment

These short requirements, and their longer form, include and go beyond issues of bias, acting as a checklist for engineers and teams. We can develop more trustworthy AI systems by examining those biases within our models that could be unlawful, unethical, or un-robust, in the context of the problem statement and domain.

Historical cases of AI bias

Below are three historical models with dubious trustworthiness, owing to AI bias that is unlawful, unethical, or un-robust. The first and most famous case, the COMPAS model, shows how even the simplest models can discriminate unethically according to race. The second case illustrates a flaw in most natural language processing (NLP) models: They are not robust to racial, sexual and other prejudices. The final case, the Allegheny Family Screening Tool, shows an example of a model fundamentally flawed by biased data, and some best practices in mitigating those flaws.

COMPAS

The canonical example of biased, untrustworthy AI is the COMPAS system, used in Florida and other US states. The COMPAS system used a regression model to predict whether or not a perpetrator was likely to recidivate. Though optimized for overall accuracy, the model produced twice as many false positives for recidivism for African American defendants as for Caucasian defendants.

The COMPAS example shows how unwanted bias can creep into our models no matter how routine our methodology. From a technical perspective, the approach taken to the COMPAS data was extremely ordinary, though the underlying survey data contained questions of questionable relevance. A small supervised model was trained on a dataset with a small number of features. (In my practice, I have followed a similar technical procedure dozens of times, as has likely any data scientist or ML engineer.) Yet these ordinary design choices produced a model that contained unwanted, racially discriminatory bias.
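
To make the failure mode concrete, here is a minimal sketch in the spirit of the COMPAS setup, not its actual data or pipeline: an ordinary classifier is trained on synthetic data (the group variable and features are invented), and false positive rates are then compared across groups.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)                        # invented 0/1 protected attribute
    x = rng.normal(size=(n, 3)) + group[:, None] * 0.5   # features correlate with group
    y = (x.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

    model = LogisticRegression().fit(x, y)
    pred = model.predict(x)

    # False positive rate per group: P(predicted positive | actually negative)
    for g in (0, 1):
        mask = (group == g) & (y == 0)
        print(f"group {g}: false positive rate = {pred[mask].mean():.2f}")

A run of this sketch will typically show different false positive rates for the two groups; checking for exactly this disparity is the step the COMPAS process skipped.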

The biggest issue in the COMPAS case was not with the simple model choice, or even that the data was flawed. Rather, the COMPAS team failed to consider that the domain (sentencing), the question (detecting recidivism), and the answers (recidivism scores) are known to involve disparities on racial, sexual, and other axes even when algorithms are not involved. Had the team looked for bias, they would have found it. With that awareness, the COMPAS team might have been able to test different approaches and recreate the model while adjusting for bias. This would have then worked to reduce unfair incarceration of African Americans, rather than exacerbating it.

Any NLP model pre-trained naïvely on Common Crawl, Google News, or any other corpus since Word2Vec

Large, pre-trained models form the base for most NLP tasks. Unless these base models are specially designed to avoid bias along a particular axis, they are certain to be imbued with the inherent prejudices of the corpora they are trained with—for the same reason that these models work at all. The results of this bias, along racial and gendered lines, have been shown in GloVe and Word2Vec models trained on Common Crawl and Google News, respectively. While contextual models such as BERT are the current state-of-the-art (rather than Word2Vec and GloVe), there is no evidence the corpora these models are trained on are any less discriminatory.
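
One can probe these associations directly. A quick sketch, assuming gensim is installed (the pre-trained Google News Word2Vec model, about 1.6 GB, is downloaded on first use), runs the classic analogy probe from the word-embedding bias literature:

    import gensim.downloader as api

    # Load the pre-trained Google News Word2Vec vectors
    wv = api.load("word2vec-google-news-300")

    # Classic probe: "man is to computer_programmer as woman is to ___?"
    print(wv.most_similar(positive=["woman", "computer_programmer"],
                          negative=["man"], topn=3))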

Although the best model architectures for any NLP problem are imbued with discriminatory sentiment, the solution is not to abandon pre-trained models but rather to consider the particular domain in question, the problem statement, and the data in totality with the team. If an application is one where discriminatory prejudice by humans is known to play a significant part, developers should be aware that models are likely to perpetuate that discrimination.

Allegheny family screening tool: unfairly biased, but well-designed and mitigated

In this final example, we discuss a model built from unfairly discriminatory data, but the unwanted bias is mitigated in several ways. The Allegheny Family Screening Tool is a model designed to assist humans in deciding whether a child should be removed from their family because of abusive circumstances. The tool was designed openly and transparently with public forums and opportunities to find flaws and inequities in the software.

The unwanted bias in the model stems from a public dataset that reflects broader societal prejudices. Middle- and upper-class families have a greater ability to “hide” abuse by using private health providers. Referrals to Allegheny County occur over three times as often for African-American and biracial families as for white families. Commentators like Virginia Eubanks and Ellen Broad have claimed that data issues like these can only be fixed if society is fixed, a task beyond any single engineer.

In production, the county combats inequities in its model by using it only as an advisory tool for frontline workers, and designs training programs so that frontline workers are aware of the failings of the advisory model when they make their decisions. With new developments in debiasing algorithms, Allegheny County has new opportunities to mitigate latent bias in the model.

The development of the Allegheny tool has much to teach engineers about the limits of algorithms in overcoming latent discrimination in data and the societal discrimination that underlies that data. It also gives engineers and designers an example of a consultative model-building process that can mitigate the real-world impact of potentially discriminatory bias in a model.

Avoiding and mitigating AI bias: key business awareness

Fortunately, there are some debiasing approaches and methods—many of which use the COMPAS dataset as a benchmark.

Improve diversity, mitigate diversity deficits

Maintaining diverse teams, both in terms of demographics and in terms of skillsets, is important for avoiding and mitigating unwanted AI bias. Despite continuous lip service paid to diversity by tech executives, women and people of color remain under-represented.

Various ML models perform worse on statistical minorities within the AI industry itself, and the first people to notice these issues are often users who are female and/or people of color. With more diversity in AI teams, issues around unwanted bias can be noticed and mitigated before release into production.

Be aware of proxies: removing protected class labels from a model may not work!

A common, naïve approach to removing bias related to protected classes (such as sex or race) from data is to delete the labels marking race or sex from the models. In many cases, this will not work, because the model can build up understandings of these protected classes from other labels, such as postal codes. The usual practice involves removing these proxy labels as well, both to improve the model’s results in production and to meet legal requirements. The recent development of debiasing algorithms, which we will discuss below, represents a way to mitigate AI bias without removing labels.
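
One quick audit for proxies: check whether a simple model can recover the protected attribute from the remaining features. A minimal sketch on synthetic data, with an invented postal-code proxy standing in for residential segregation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    group = rng.integers(0, 2, n)                         # protected attribute (to be dropped)
    postal_code = group + rng.normal(scale=0.3, size=n)   # invented proxy for the group
    income = rng.normal(size=n)

    # The protected label is excluded from the features...
    features = np.column_stack([postal_code, income])

    # ...yet an auditor model recovers it almost perfectly from the proxy.
    auditor = LogisticRegression().fit(features, group)
    print(f"group recovered with {auditor.score(features, group):.0%} accuracy")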

Be aware of technical limitations

Even the best practices in product design and model building will not be enough to remove the risks of unwanted bias, particularly in cases of biased data. It is important to recognize the limitations of our data, models, and technical solutions to bias, both for awareness’ sake, and so that human methods of limiting bias in machine learning such as human-in-the-loop can be considered.

Data scientists have a growing number of technical awareness and debiasing tools available to them, which supplement a team’s capacity to avoid and mitigate AI bias. Currently, awareness tools are more sophisticated and cover a wide range of model choices and bias measures, while debiasing tools are nascent and can mitigate bias in models only in specific cases.

Awareness and debiasing tools for supervised learning algorithms

IBM has released a suite of awareness and debiasing tools for binary classifiers under the AI Fairness 360 project. To detect AI bias and mitigate it, all methods require a class label (e.g., race, sexual orientation). Against this class label, a range of metrics can be run (e.g., disparate impact and equal opportunity difference) that quantify the model’s bias toward particular members of the class. We include an explanation of these metrics at the bottom of the article.
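
A sketch of the AIF360 API (signatures as documented at the time of writing and may differ across versions; a toy table stands in for real data, with sex as the class label and 1 as the privileged value):

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    df = pd.DataFrame({
        "sex":   [0, 0, 0, 0, 1, 1, 1, 1],   # 0 = unprivileged, 1 = privileged
        "score": [1, 0, 0, 0, 1, 1, 1, 0],   # favorable outcome = 1
    })
    dataset = BinaryLabelDataset(df=df, label_names=["score"],
                                 protected_attribute_names=["sex"])
    metric = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=[{"sex": 0}],
                                      privileged_groups=[{"sex": 1}])
    print(metric.disparate_impact())   # (1/4) / (3/4) = 0.33; 1.0 would be parity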

Once bias is detected, the AI Fairness 360 library (AIF360) has 10 debiasing approaches (and counting) that can be applied to models ranging from simple classifiers to deep neural networks. Some are preprocessing algorithms, which aim to balance the data itself. Others are in-processing algorithms, which penalize unwanted bias while building the model. Yet others apply postprocessing steps to balance favorable outcomes after a prediction. The best choice will depend on your problem.
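
As one example, Reweighing, a preprocessing algorithm from the library, assigns per-instance weights so that outcome rates balance across groups. A sketch continuing from the toy dataset above:

    from aif360.algorithms.preprocessing import Reweighing

    rw = Reweighing(unprivileged_groups=[{"sex": 0}],
                    privileged_groups=[{"sex": 1}])
    balanced = rw.fit_transform(dataset)   # `dataset` from the previous sketch

    # The transformed dataset carries per-instance weights that a downstream
    # classifier can consume (e.g., scikit-learn's sample_weight argument).
    print(balanced.instance_weights)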

AIF360 has a significant practical limitation in that the bias detection and mitigation algorithms are designed for binary classification problems, and need to be extended to multiclass and regression problems. Other libraries, such as Aequitas and LIME, have good metrics for some more complicated models—but they only detect bias. They aren’t capable of fixing it. But even just the knowledge that a model is biased before it goes into production is still very useful, as it should lead to testing alternative approaches before release.

General awareness tool: LIME

The Local Interpretable Model-agnostic Explanations (LIME) toolkit can be used to measure feature importance and explain the local behavior of most models—multiclass classification, regression, and deep learning applications included. The general idea is to fit a highly interpretable linear or tree-based model to the predictions of the model being tested for bias.

For instance, deep CNNs for image recognition are very powerful but not very interpretable. By training a linear model to emulate the behavior of the network, we can gain some insight into how it works. Optionally, human decision-makers can review the reasons behind the model’s decision in specific cases through LIME and make a final decision on top of that; the original article illustrates this process with a medical imaging example.
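
A minimal sketch of LIME on a tabular classifier; the random forest and scikit-learn’s bundled breast cancer dataset are stand-ins chosen only because they are self-contained and echo the earlier example:

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(data.data,
                                     feature_names=list(data.feature_names),
                                     class_names=list(data.target_names))
    explanation = explainer.explain_instance(data.data[0], model.predict_proba,
                                             num_features=5)
    print(explanation.as_list())   # top features driving this single prediction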

Debiasing NLP models

Earlier, we discussed the biases latent in most corpora used for training NLP models. If unwanted bias is likely to exist for a given problem, I recommend using readily available debiased word embeddings. Judging by the interest from the academic community, it is likely that newer NLP models like BERT will have debiased word embeddings shortly.
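
At the core of most embedding-debiasing schemes is a projection step: estimate a bias direction and remove its component from words that should be neutral. A rough numpy sketch, reusing the wv vectors loaded earlier (the single he/she pair is a crude estimate of the gender direction; published methods average over many such pairs):

    import numpy as np

    def remove_component(vec, direction):
        direction = direction / np.linalg.norm(direction)
        return vec - np.dot(vec, direction) * direction   # subtract the projection

    gender_direction = wv["he"] - wv["she"]               # crude one-pair estimate
    debiased_doctor = remove_component(wv["doctor"], gender_direction)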

Debiasing convolutional neural networks (CNNs)

Although LIME can explain the importance of individual features and provide local explanations of behavior on particular image inputs, LIME does not explain a CNN’s overall behavior or allow data scientists to search for unwanted bias.

In famous cases where unwanted CNN bias was found, members of the public (such as Joy Buolamwini) noticed instances of bias based on their membership of an underprivileged group. Hence the best approaches in mitigation combine technical and business approaches: Test often, and build diverse teams that can find unwanted AI bias through testing before production.

Legal obligations: the GDPR

In this section, we focus on the European Union’s General Data Protection Regulation (GDPR), globally the de facto standard in data protection legislation. (It is not the only such legislation; there is also China’s Personal Information Security Specification, for example.) The scope and meaning of the GDPR are highly debatable, so we are not offering legal advice in this article, by any means. Nevertheless, it is in the interests of organizations globally to comply, as the GDPR applies not only to European organizations but to any organization handling data belonging to European citizens or residents.

The GDPR is separated into binding articles and non-binding recitals. While the articles impose some burdens on engineers and organizations using personal data, the most stringent provisions for bias mitigation sit in Recital 71 and are not binding. Recital 71 is nonetheless a likely signpost for future regulation, as legislators have already contemplated it. Commentaries explore GDPR obligations in further detail.

We will zoom in on two key requirements and what they mean for model builders.

1. Prevention of discriminatory effects

The GDPR imposes requirements on the technical approaches to any modeling on personal data. Data scientists working with sensitive personal data will want to read the text of Article 9, which forbids many uses of particularly sensitive personal data (such as racial identifiers). More general requirements can be found in Recital 71:

[. . .] use appropriate mathematical or statistical procedures, [. . .] ensure that the risk of errors is minimised [. . .], and prevent discriminatory effects on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status, or sexual orientation.

GDPR (emphasis mine)

Much of this recital is accepted as fundamental to good model building: reducing the risk of errors is a first principle. However, under this recital, data scientists are obliged to create not only accurate models but models that do not discriminate! As outlined above, this may not be possible in all cases. The key remains being sensitive to the discriminatory effects that might arise from the question at hand and its domain, and using business and technical resources to detect and mitigate unwanted bias in AI models.

2. The right to an explanation

Rights to “meaningful information about the logic involved” in automated decision-making can be found throughout GDPR articles 13-15… Recital 71 explicitly calls for “the right […] to obtain an explanation” (emphasis mine) of automated decisions. (However, the debate continues as to the extent of any binding right to an explanation.)

As we have discussed, some tools for providing explanations for model behavior do exist, but complex models (such as those involving computer vision or NLP) cannot be easily made explainable without losing accuracy. Debate continues as to what an explanation would look like. As a minimum best practice, for models likely to be in use into 2020, LIME or other interpretation methods should be developed and tested for production.

Ethics and AI: a worthy and necessary challenge

In this post, we have reviewed the problems of unwanted bias in our models, discussed some historical examples, provided some guidelines for businesses and tools for technologists, and discussed key regulations relating to unwanted bias.

As machine learning models surpass human performance in certain domains, they can also surpass human understanding. But as long as models are designed by humans and trained on data gathered by humans, they will inherit human prejudices.

Managing these human prejudices requires careful attention to data, using AI to help detect and combat unwanted bias when necessary, building sufficiently diverse teams, and having a shared sense of empathy for the users and targets of a given problem space. Ensuring that AI is fair is a fundamental challenge of automation. As the humans and engineers behind that automation, it is our ethical and legal obligation to ensure AI acts as a force for fairness.


Definitions of AI bias metrics

Disparate impact

Disparate impact is defined as “the ratio in the probability of favorable outcomes between the unprivileged and privileged groups.” For instance, if women are 70% as likely to receive a perfect credit rating as men, this represents a disparate impact. The disparate impact may be present both in the training data and in the model’s predictions: in these cases, it is important to look deeper into the underlying training data and decide if disparate impact is acceptable or should be mitigated.
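
The 70% example as arithmetic (the rates are invented):

    p_unprivileged = 0.35   # invented: share of women receiving a perfect rating
    p_privileged = 0.50     # invented: share of men receiving a perfect rating
    print(p_unprivileged / p_privileged)   # 0.7: women are 70% as likely as men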

Equal Opportunity Difference

Equal opportunity difference is defined (in the AI Fairness 360 article found above) as “the difference in true positive rates [recall] between unprivileged and privileged groups.” The famous example of a high equal opportunity difference discussed in the paper is the COMPAS case. As discussed above, African-Americans were erroneously assessed as high-risk at a higher rate than Caucasian offenders. This discrepancy constitutes an equal opportunity difference.
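
And the same as arithmetic, again with invented rates:

    tpr_unprivileged = 0.55   # invented recall for the unprivileged group
    tpr_privileged = 0.72     # invented recall for the privileged group
    print(tpr_unprivileged - tpr_privileged)   # -0.17; zero would indicate parity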

The Toptal Engineering Blog is a hub for in-depth development tutorials and new technology announcements created by professional software engineers in the Toptal network. The original piece was written by Michael McKenna.


Hearthside sales weak due to COVID-19


NEW YORK — Despite heavy pantry loading of products made by the company’s packaged foods customers in the first quarter, Hearthside Food Solutions experienced softening sales during the period, according to Moody’s Investors Service.

An update on privately-held Hearthside’s business was included in an announcement by Moody’s that a proposed new term loan would be assigned a B2 credit rating.

In addition to stepped-up demand from pantry loading, Hearthside benefited in the first quarter from an overall boost in in-home consumption.

Business in other Hearthside channels was depressed.

“Management attributed the weakness to a decline in fresh sandwich sales as well as a decline in functional bar sales, which are sold at airports and other on-the-go locations and were thus impacted by reduced travel and commuting due to the coronavirus pandemic,” Moody’s said.

For the full year, the ratings agency projected flat revenues for Hearthside.

The B2 rating was assigned to H-Food Holdings, LLC (Hearthside), which is seeking a senior secured first lien term loan due in 2025. Moody’s also affirmed a B3 corporate family rating as well as B2 ratings for the company’s $225 million first lien secured revolving credit facility and $1.6 billion senior secured first lien term loan.

Hearthside also has $350 million in senior unsecured global notes with a Caa2 rating.

A B2 rating is considered speculative and subject to high credit risk. The Caa2 rating is considered speculative, of poor standing and a high credit risk.

Hearthside will use the $100 million loan to repay about $97 million in borrowings under the first lien credit facility.

“Hearthside used the revolver borrowings to help fund a $130 million growth capital expenditures program that will expand its facilities and allow Hearthside to accommodate newly awarded contracts from its customers,” Moody’s said. “The borrowings increase Hearthside’s already high leverage and cash interest.”

Explaining the rating, Moody’s said it believes Hearthside will reduce leverage through incremental EBITDA growth in 2020 and 2021.

Hearthside’s financial leverage, over 8 times EBITDA, has kept the company’s credit rating under pressure.

“The rating also reflects event risk, such as additional leveraged acquisitions and aggressive shareholder distributions, given the company’s financial sponsor ownership,” Moody’s said. “At the same time, the rating incorporates the company’s good position as a contract manufacturer and packager of food products. The company has longstanding relationships with leading US food companies and limited commodity exposure due to pass-through cost arrangements. This helps limit cash flow and earnings volatility. The company has good liquidity.”

Based in Downers Grove, Ill., Hearthside is a contract manufacturer and packer of packaged food products in North America and, to a lesser extent, Europe. The company produces a variety of nutrition bars, cookies, cereals, baked foods and snacks at 38 manufacturing facilities across the United States and Europe. The company’s customers include General Mills, Kellogg, Kraft Heinz, PepsiCo and Mondelez International. The company has about $3.1 billion in annual revenues and is owned by an investment group led by Charlesbank Capital Partners and Partners Group following an April 2018 leveraged buyout.


‘New Westralia’ members occupy WA museum, denounce Pope

Western Australia got a taste of revolution today when two men allegedly locked themselves inside the old courthouse in York, east of Perth, and claimed ownership of the building.

The men, members of a group named New Westralia, then raised the St George Cross flag of England.

A group of people allegedly took over York Courthouse, east of Perth, in the name of New Westralia. (9News)

The alleged incursion took place at about 8.30am.

“This flag is replacing the Satanic mob that have got control of us at the moment,” one of the men said.

The heritage-listed courthouse now holds a museum.

“New Westralia are taking entry and possession of this court in York,” the alleged occupiers declared.

The group raised the St George Cross flag and denounced the Pope. (9News)

“The Bishop of Rome has no jurisdiction in this here realm of England.”

The group are accused of smashing their way inside, where they bowed their heads in prayer over a Bible.

On Facebook, they claimed they were under siege from police.

Officers negotiated with the group for an hour before moving in and carrying the occupiers out.

Three men and a woman faced court this afternoon, charged with trespass and criminal damage.


Cartoon: Carlos on #BlackLivesMatter – The Mail & Guardian





BBC senior executive promoted to director general


The BBC’s headquarters in London | Oli Scarff/Getty Images

Tim Davie will take over from Tony Hall in September.

LONDON — Tim Davie, one of the BBC’s most senior executives, will lead the British public broadcaster as its new director general.

The corporation announced Friday that Davie has been promoted from chief executive of BBC Studios, the subsidiary that sells BBC programs overseas. Davie has spent most of his professional career at the broadcaster, which he joined in 2005 from Pepsi.

He will become the 17th director general, taking over from Tony Hall, who is stepping down this summer after seven years in the role.

Davie’s in-tray will include negotiating the future of the license fee, the BBC’s main source of income, with the U.K. government. The system will stay in place until at least 2027, but ministers are going to review funding from 2022 onwards.

He will also have to fight to attract younger audiences, amid intense competition with other online platforms, and deliver on the government’s request to represent all corners of the U.K. in its coverage.

David Clementi, outgoing chairman of the BBC Board, said in a statement that Davie “has an enthusiasm and energy for reform, while holding dear to the core mission of the BBC.”

“We know that the industry is undergoing unprecedented change and the organisation faces significant challenges as well as opportunities. I am confident that Tim is the right person to lead the BBC as it continues to reform and change,” the statement said.

Davie said the corporation must continue to evolve. “Looking forward, we will need to accelerate change so that we serve all our audiences in this fast-moving world. Much great work has been done, but we will continue to reform, make clear choices and stay relevant. I am very confident we can do this because of the amazing teams of people that work at the BBC,” he said in a statement.


