Wednesday, April 29, 2026

Sonam Kapoor gets birthday surprise from ‘the best team in the world’ | VIDEO

Image Source : INSTAGRAM/SONAM KAPOOR

Bollywood actress Sonam Kapoor celebrated her 35th birthday on Tuesday. The actress was flooded with wishes and heartfelt greetings on social media. From fans to celebrities, everyone wished the actress on her special day. However, it was the surprise birthday gift from her team that melted her heart. Sonam’s team recreated songs from the actress’s films and made a video to wish her. In the clip, they are seen dancing to all her popular songs. Sharing the video, Sonam wrote, “My team made me an awesome birthday video! To the best team in the world.. I miss you guys so much.. thanks for making the best video ever. I can’t wait to see all of you and hug you tight. Hopefully soon! Love you guys so much.”

Reacting to the video, Sonam’s sister Rhea Kapoor commented, “Means this video should play on loop constantly it belongs in the hall of fame @vaishnavpraveen @kareenakapoorkhan has demanded a solo.” The actress’s mother Sunita Kapoor also shared the video and lauded the team’s efforts. She said, “What a fabulous effort made by the best team ever. You guys are too good.” Her father Anil Kapoor said, “It’s got repeat value Sonam … kudos to the.”

On Sonam’s birthday, Anil Kapoor shared a beautiful photo of the actress and showered her with love. He captioned the photo: “To a daughter like no other, the perfect partner to @anandahuja, a star on screen and an icon with an inimitable style. She’s my confidant, my joy, my pride, the most generous hearted soul I know, (the only person I am shit scared of) & now a bona fide master chef! Happy Birthday, @sonamkapoor! I’m so happy that you’re here with all of us today! Love You, Always!”

Also, Sonam’s sister Rhea Kapoor had the cutest birthday wish for her as she called her the ‘Best Friend’. She wrote, “Happy birthday to my sister. There are countless things I wouldn’t have (a career) or be (a stylist/producer) without you but I think the most valuable gifts you have given me are your belief in me and that lurking idealism that comes so easily to you. The way you trust me has taught me that that kind of faith is possible and worth striving for and your idealism has become part of my conscience pushing me to be better every day no matter how cynical I want to get.”

“They say never judge a book by its cover and with you it’s a conflict- I’m styling that cover and most people seem to really like it. But what’s under is delightful beyond what people can comprehend until they get to know you. You deserve everything you hope for and have worked for. I love you the most. Best friends forever.”

On the work front, Sonam Kapoor was last seen in the 2019 film The Zoya Factor opposite Dulquer Salmaan. She hasn’t announced her next film yet.





Mystery of Swedish prime minister’s murder still unsolved as investigation closes

A decades-long criminal investigation, compared in scale to the probes into the assassination of President John F. Kennedy and the Lockerbie bombing, has ended with the case unsolved, as investigators said they were calling off the hunt for answers in the 1986 killing of then Prime Minister Olof Palme.

Swedish prosecutors attracted worldwide attention when they announced Wednesday they would reveal the results of their investigation into the killing, which has fascinated Sweden for 34 years and become the subject of florid conspiracy theories.

But those looking for answers were disappointed: the prosecutor’s office said it was discontinuing the investigation because its main suspect, Stig Engström, killed himself in 2000.

Palme was 59 when he was shot in the back on a busy Stockholm street while returning from a movie theater. Supporters credit him with forging the image of modern-day Sweden that is still vaunted globally today.

Some 90,000 people have worked on the murder case. It has generated countless conspiracy theories, with alleged suspects ranging from Kurdish militias and Indian arms dealers to South Africans angry about Palme’s stance against apartheid.

Stig Engström gestures outside the Skandia office in Stockholm in April 1986. (TT News Agency / Reuters)

“Because that person is dead, I cannot bring charges against him but decided to close the preliminary investigation,” prosecutor Krister Petersson said in a briefing.

Engström had become well known as a suspect in the case and was nicknamed the “Skandiamannen” because he worked at the nearby Skandia insurance company. He was one of the first people at the scene and claimed that he had attempted to resuscitate Palme, whose policies he was known to strongly oppose.

“Stig Engström wasn’t a focal point on the investigation but we’ve looked at his background, and what we can see there is that he was used to using weapons, he had been employed by the army and was a member of a shooting club,” the prosecutor said. His movements on the night of the murder were “consistent with how we believe the perpetrator has acted that evening” and he was known to have financial and alcohol problems, he said.

Palme’s wife, Lisbet, was injured in the attack and later identified the shooter as Christer Pettersson, an alcoholic and drug addict, who was convicted of her husband’s murder and later died in 2004.

That conviction was later overturned after police failed to produce any technical evidence against him, leaving the murder an unsolved mystery.

The Associated Press contributed to this report.




HBO Max Pulls ‘Gone With the Wind,’ Citing Racist Depictions

HBO Max has removed from its catalog “Gone With the Wind,” the 1939 movie long considered a triumph of American cinema but one that romanticizes the Civil War-era South while glossing over its racial sins.

The streaming service pledged to eventually bring the film back “with a discussion of its historical context” while denouncing its racial missteps, a spokesperson said in a statement on Tuesday.

Set on a plantation and in Atlanta, the film won multiple Academy Awards, including best picture, and remains among the most celebrated movies in cinematic history. But its rose-tinted depiction of the antebellum South and its blindness to the horrors of slavery have long been criticized, and that scrutiny was renewed this week as protests over police brutality and the death of George Floyd continued to pull the United States into a wide-ranging conversation about race.

“‘Gone With the Wind’ is a product of its time and depicts some of the ethnic and racial prejudices that have, unfortunately, been commonplace in American society,” an HBO Max spokesperson said in a statement. “These racist depictions were wrong then and are wrong today, and we felt that to keep this title up without an explanation and a denouncement of those depictions would be irresponsible.”

HBO Max, owned by AT&T, pulled the film on Tuesday, one day after John Ridley, the screenwriter of “12 Years a Slave,” wrote an op-ed in The Los Angeles Times calling for its removal. Mr. Ridley said he understood that films were snapshots of their moment in history, but that “Gone With the Wind” was still used to “give cover to those who falsely claim that clinging to the iconography of the plantation era is a matter of ‘heritage, not hate.’”

“It is a film that, when it is not ignoring the horrors of slavery, pauses only to perpetuate some of the most painful stereotypes of people of color,” he wrote.

By several measures, the film was one of the most successful in American history. It received eight competitive Academy Awards and remains the highest-grossing film ever when adjusting for inflation. In 1998, it placed sixth on the American Film Institute’s list of greatest films of all time.

There was little criticism of the film when it was released, though in 1939 an editorial board member of The Daily Worker, a newspaper published by the Communist Party USA, called it “an insidious glorification of the slave market” and the Ku Klux Klan.

But the world in which it is viewed has changed, and with each decade discomfort has grown as people revisit its racial themes and what was omitted. In 2017, the Orpheum theater in Memphis said it would stop showing the film, as it had done each year for 34 years, after receiving complaints from patrons and other commenters. The president of the theater said it could not show a film “that is insensitive to a large segment of its local population.”

Based on a 1936 book by Margaret Mitchell, the film chronicles the love affair of Scarlett O’Hara, the daughter of a plantation owner, and Rhett Butler, a charming gambler. Critics have long said that the slaves are depicted as well-treated, content and loyal to their masters, a trope that rewrites the reality of how enslaved people were forced to live. Hattie McDaniel became the first African-American to win an Oscar by playing Mammy, an affable slave close to Scarlett O’Hara.

The nationwide protests of recent weeks have caused other entertainment companies to reconsider how their content is viewed in the current climate. The Paramount Network said on Tuesday that it had removed “Cops,” the long-running reality show that glorified police officers, from its schedule before its 33rd season.

There have also been similar moves in Britain. On Monday, the BBC removed episodes of the comedy series “Little Britain” — which featured one character in blackface — from its streaming service.

“Times have changed since ‘Little Britain’ first aired so it is not currently available on BBC iPlayer,” a BBC spokesperson said. The show had already been removed from Netflix and was also taken off the BritBox streaming service.

“Little Britain,” which was shown in the early 2000s, was created by David Walliams and Matt Lucas. Mr. Lucas, who was recently named the new host of “The Great British Baking Show,” has said in interviews that he would not make “Little Britain” today.


Jim Gaffigan Rips ‘Horrible Heartless Fool’ Trump For His Attack On Elderly Protester

Comic Jim Gaffigan called out President Donald Trump on Tuesday for his unhinged attack on an elderly protester who was seriously injured by police in Buffalo last week.

In a scene caught on video, Martin Gugino, 75, approached police officers near a protest and appeared to wave his cellphone. He was then shoved by police, fell to the ground and began to bleed from his head as multiple officers stepped around him. 

But Trump claimed ― without evidence ― that Gugino “could be an ANTIFA provocateur” trying to jam police communications equipment who “fell harder than he was pushed.” 

Gaffigan responded by slamming Trump as a “horrible heartless fool.” Then, no doubt anticipating critics who might tell him to “stick to comedy,” he preemptively used the phrase on Trump himself.

Gaffigan also had a comment for fans who might not be happy with his turn to the “political.”

Gugino is recovering in a Buffalo hospital, where he is in serious but stable condition. The New York Times, citing a friend, reported that Gugino is unable to move his head without pain and expected to remain in the hospital for weeks.

Gugino is reportedly a longtime activist with the nonviolent Catholic Worker Movement, which aids the poor and disenfranchised. According to the group’s website, members “protest injustice, war, racism and violence of all forms.” 




Ethical AI and the importance of guidelines for algorithms — explained

In October, Amazon had to discontinue an artificial intelligence–powered recruiting tool after it discovered the system was biased against female applicants. In 2016, a ProPublica investigation revealed a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google refrained from renewing its AI contract with the Department of Defense after employees raised ethical concerns.

Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in the past few years. AI research has a six-decade history, but recent advances in machine learning and neural networks have pushed artificial intelligence into sensitive domains such as hiring, criminal justice and health care.

In tandem with advances in artificial intelligence, there’s growing interest in establishing criteria and standards to weigh the robustness and trustworthiness of the AI algorithms that are helping or replacing humans in making important and critical decisions.

With the field being nascent, there’s little consensus over the definition of ethical and trustworthy AI, and the topic has become the focus of many organizations, tech companies and government institutions.

In a recently published document titled “Ethics Guidelines for Trustworthy AI,” the European Commission has laid out seven essential requirements for developing ethical and trustworthy artificial intelligence. While we still have a lot to learn as AI takes a more prominent role in our daily lives, the EC’s guidelines, unpacked below, provide a solid roundup of the kinds of issues the AI industry faces today.

Human agency and oversight

“AI systems should both act as enablers to a democratic, flourishing and equitable society by supporting the user’s agency and foster fundamental rights, and allow for human oversight,” the EC document states.

Human agency means that users should have a choice not to become subject to an automated decision “when this produces legal effects on users or similarly significantly affects them,” according to the guidelines.

AI systems can invisibly threaten the autonomy of humans who interact with them by influencing their behavior. One of the best-known examples in this regard is Facebook’s Cambridge Analytica scandal, in which a research firm used the social media giant’s advertising platform to send personalized content to millions of users with the aim of affecting their vote in the 2016 U.S. presidential elections.

The challenge of this requirement is that we already interact with hundreds of AI systems every day: when we scroll through our social media feeds, view trends on Twitter, Google a term, search for videos on YouTube, and more.

The companies that run these systems provide very few controls over the AI algorithms. In some cases, such as Google’s search engine, companies explicitly refrain from publishing the inner workings of their AI algorithms to prevent manipulation and gaming. Meanwhile, various studies have shown that search results can have a dramatic influence on the behavior of users.

Human oversight means that no AI system should be able to perform its functions without some level of control by humans. This means that humans should either be directly involved in the decision-making process or have the option to review and override decisions made by an AI model.

In 2016, Facebook had to shut down the AI that ran its “Trending Topics” section because it pushed out false stories and obscene material. It then put humans back in the loop to review and validate the content the module flagged as trending.

Technical robustness and safety

The EC experts state that AI systems must “reliably behave as intended while minimizing unintentional and unexpected harm, and preventing unacceptable harm” to humans and their environment.

One of the greatest concerns of current artificial intelligence technologies is the threat of adversarial examples. Adversarial examples manipulate the behavior of AI systems by making small changes to their input data that are mostly invisible to humans. This happens mainly because AI algorithms work in ways that are fundamentally different from the human brain.

Adversarial examples can happen by accident, such as an AI system that mistakes sand dunes for nudes. But they can also be weaponized into harmful adversarial attacks against critical AI systems. For instance, a malicious actor can change the coloring and appearance of a stop sign in a way that will go unnoticed to a human but will cause a self-driving car to ignore it and cause a safety threat.

Adversarial attacks are a particular concern with deep learning, a popular branch of AI that develops its behavior by examining thousands or millions of examples.
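To make the idea concrete, here is a minimal sketch of how a small, targeted perturbation can flip a classifier’s decision. The toy linear model and all of its numbers are invented for illustration; they are not drawn from any real system or from the EC document.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, class 1 if score > 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return int(np.dot(w, x) + b > 0)

x = np.array([1.0, 1.0, 0.2])       # original input; score = 0.1, so class 1

# Adversarial perturbation: nudge every feature a tiny, bounded amount in
# the direction that lowers the score. For a linear model the gradient of
# the score with respect to x is just w, so we step against sign(w).
epsilon = 0.4
x_adv = x - epsilon * np.sign(w)    # each feature changes by at most 0.4

print(predict(x), predict(x_adv))   # the small perturbation flips the class
```

For a linear model the perturbation direction is simply the sign of the weights; attacks on deep networks follow the same spirit but use gradients of the loss, which is why changes invisible to humans can still flip a prediction.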

There have already been several efforts to build robust AI systems that are resilient to adversarial attacks. AutoZOOM, a method developed by researchers at the MIT-IBM Watson AI Lab, helps detect adversarial vulnerabilities in AI systems.

The EC document also recommends that AI systems should be able to fall back from machine learning to rule-based systems or ask a human to intervene.

Since machine learning models are based on statistics, it should be clear how accurate a system is. “When occasional inaccurate predictions cannot be avoided, it is important that the system can indicate how likely these errors are,” the EC’s ethical guidelines state. This means that the end user should know about the confidence level and the general reliability of the AI system they’re using.
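As a rough sketch of what such a confidence signal might look like in practice, a system can report how sure it is of each prediction and hand low-confidence cases to a human reviewer. The logistic model, its weights and the 0.8 threshold below are illustrative assumptions, not anything specified by the EC document.

```python
import numpy as np

# Illustrative two-feature logistic model.
w = np.array([2.0, -1.0])
b = -0.1

def predict_proba(x):
    """Probability of class 1 under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def decide(x, threshold=0.8):
    """Return a decision plus the model's confidence in it.

    Cases where the winning class's probability falls below the
    threshold are deferred to a human instead of decided automatically.
    """
    p = predict_proba(x)
    confidence = max(p, 1.0 - p)    # confidence in whichever class wins
    if confidence < threshold:
        return "refer to human", confidence
    return ("class 1" if p >= 0.5 else "class 0"), confidence

print(decide(np.array([2.0, 0.5])))   # high confidence: automated decision
print(decide(np.array([0.1, 0.1])))   # near the boundary: human review
```

Exposing the confidence value alongside the decision is one simple way to give end users the error-likelihood signal the guidelines call for.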

Privacy and data governance

“AI systems must guarantee privacy and data protection throughout a system’s entire lifecycle. This includes the information initially provided by the user, as well as the information generated about the user over the course of their interaction with the system,” according to the EC document.

Machine learning systems are data-hungry. The more quality data they have, the more accurate they become. That’s why companies have a tendency to collect more and more data from their users. Companies like Facebook and Google have built economic empires by building and monetizing comprehensive digital profiles of their users. They use this data to train their AI models to provide personalized content and ads and to keep users glued to their apps, maximizing profit.

But how responsible are these companies in maintaining the security and privacy of this data? Not very. They’re also not very explicit about the amount of data they collect and the ways they use it.

In recent years, general awareness about privacy and new rules such as the European Union’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act (CCPA) are forcing organizations to be more transparent about their data collection and processing practices. In the past year, many companies have offered users the option to download their data or to ask the company to delete it from its servers.

However, more needs to be done. Many companies share sensitive user information with their employees or third-party contractors to label data and train their AI algorithms. In many cases, users don’t know that human operators review their information and they falsely believe that only algorithms process their data.

Recently, Bloomberg revealed that thousands of Amazon employees across the world access the voice recordings of users of its Echo smart speakers to help improve the company’s AI-powered digital assistant, Alexa. The practice does not sit well with users, who expect to enjoy privacy in their homes.

Transparency

The European Commission experts define AI transparency in three components: traceability, explainability and communication.

AI systems based on machine learning and deep learning are highly complex. They develop their behavior based on correlations and patterns found in thousands and millions of training examples. Often, the creators of these algorithms don’t know the logical steps behind the decisions their AI models make. This makes it very hard to find the reasons behind the errors these algorithms make.

The EC specifically recommends that developers of AI systems document the development process and the data they use to train their algorithms, and explain their automated decisions in ways that are understandable to humans.

Explainable AI has become the focus of several initiatives by the private and public sector. This includes a widespread effort by the Defense Advanced Research Projects Agency (DARPA) to create AI models that are open to investigation and methods that can explain AI decisions.

Another important point raised in the EC document is communication. “AI systems should not represent themselves as humans to users; humans have the right to be informed that they are interacting with an AI system,” the document reads.

Last year, Google introduced Duplex, an AI service that could place calls on behalf of users and make restaurant and salon reservations. Controversy ensued because the assistant refrained from presenting itself as an AI agent and duped its interlocutors into thinking they were speaking to a real human. The company later updated the service to present itself as Google Assistant.

Diversity, non-discrimination, and fairness

Algorithmic bias is one of the well-known controversies of contemporary AI technology. For a long time, we believed that AI would not make subjective decisions based on bias. But machine learning algorithms develop their behavior from their training data, and they reflect and amplify any bias contained in those data sets.

There have been numerous examples of algorithmic bias rearing its ugly head, such as the examples listed at the beginning of this article. Other cases include a study that showed popular AI-based facial analysis services being more accurate on men with light skin and making more errors on women with dark skin.

To prevent unfair bias against certain groups, EC’s guidelines recommend that AI developers make sure their AI systems’ data sets are inclusive.

The problem is that AI models often train on publicly available data, which frequently contains hidden biases that already exist in society.

For instance, a group of researchers at Boston University discovered that word embedding algorithms (AI models used in tasks such as machine translation and online text search) trained on online articles had developed hidden biases, such as associating programming with men and homemaker with women. Likewise, if a company trains its AI-based hiring tools with the profiles of its current employees, it might be unintentionally pushing its AI toward replicating the hidden biases and preferences of its current recruiters.
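The kind of bias probe described above can be sketched with toy vectors: project each occupation word onto a “gender direction” built from the difference between the “he” and “she” vectors. The 3-D embeddings below are invented for illustration; real word vectors such as word2vec or GloVe have hundreds of dimensions, but the probe works the same way.

```python
import numpy as np

# Toy 3-D "embeddings" invented for illustration only.
vecs = {
    "he":         np.array([ 1.0,  0.1, 0.0]),
    "she":        np.array([-1.0,  0.1, 0.0]),
    "programmer": np.array([ 0.6,  0.8, 0.1]),
    "homemaker":  np.array([-0.7,  0.7, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# The "gender direction" is the difference between the he and she vectors.
gender = vecs["he"] - vecs["she"]

for word in ("programmer", "homemaker"):
    # Positive score: closer to "he"; negative score: closer to "she".
    print(word, round(cosine(vecs[word], gender), 2))
```

In these toy vectors, “programmer” projects toward “he” and “homemaker” toward “she,” mirroring the associations the researchers found in embeddings trained on real online text.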

To counter hidden biases, the EC recommends that companies developing AI systems hire people from diverse backgrounds, cultures and disciplines.

One consideration to note, however, is that fairness and discrimination often depend on the domain. For instance, in hiring, organizations must make sure that their AI systems don’t make decisions based on attributes such as gender or ethnicity. But in another field like health care, those same parameters must be factored in when diagnosing patients.

Societal and environmental well-being

“[The] broader society, other sentient beings and the environment should be also considered as stakeholders throughout the AI system’s life cycle,” EC’s guidelines state.

The social aspect of AI has been deeply studied. A notable example is social media companies, which use AI to study the behavior of their users and provide them with personalized content. This makes social media applications addictive and profitable, but it also has a negative impact on users, making them less social, less happy and less tolerant of opposing views and opinions.

Some companies have started to acknowledge this and correct the situation. In 2018, Facebook declared that it would be making changes to its News Feed algorithm and provide users with more posts from friends and family and less from brands and publishers. The move was aimed at making the experience more social.

The environmental impact of AI is less discussed, but is equally important. Training and running AI systems in the cloud consumes a lot of electricity and leaves a huge carbon footprint. This is a problem that will grow worse as more and more companies use AI algorithms in their applications.

One solution is to use lightweight edge AI that requires very little power and can run on renewable energy. Another is to use AI itself to help improve the environment. For instance, machine learning algorithms can help manage traffic and public transport to reduce congestion and carbon emissions.

Accountability

Finally, EC calls for mechanisms “to ensure responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use.” Basically, this means there should be legal safeguards to make sure companies keep their AI systems conformant with ethical principles.

U.S. lawmakers recently introduced the Algorithmic Accountability Act which, if passed, would require companies to have their AI algorithms evaluated by the Federal Trade Commission for known problems such as algorithmic bias as well as privacy and security concerns.

Other countries, including the UK, France and Australia, have passed similar legislation to hold tech companies to account for the behavior of their AI models.

In most cases, ethical guidelines are not in line with the business model and interests of tech companies. That’s why there should be oversight and accountability. “When unjust adverse impact occurs, accessible mechanisms should be foreseen that ensure adequate redress. Knowing that redress is possible when things go wrong is key to ensure trust,” the EC document states.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve, as well as the darker implications of new tech and what we need to look out for.


Farewell to Gummy Bear Jars: Tech Offices Get a Virus Safety Makeover

“There’ll be a sign outside that room that says: ‘Hey, everybody, this meeting room now has a capacity of no more than four people. Please respect that,’” she said. “That will be part of the new normal.”

Salesforce will also use scheduling software to limit the number of people working at each office. It will not be an entirely automated process.

Executives said they would give a scheduling priority to employees who needed to go in because, say, they had to work on a specific project or because cramped family quarters made working from home difficult. Another factor: federal guidelines recommending that employers encourage employees to avoid crowded mass transit.

“Proximity to the office probably will be important, the ability to walk, ride bikes, take a taxi, drive your car when typically you would just get on the train,” said Brent Hyder, Salesforce’s chief people officer. He added that employees who lived closer to one of the suburban offices may decide to work there instead.

The biggest workplace change may be cultural. Until there is a coronavirus vaccine, or at least better medical treatments, Salesforce employees will find their formerly fun-loving office life more managed by rules and tech tools.

In other words, they may get a taste of the kind of top-down infrastructure that is more common for retail and warehouse workers — with one huge difference: If Salesforce employees would rather not fill out daily coronavirus-symptom surveys, or don’t like the new office rules, they can keep working from home.

Employees will still want to go into the office, Ms. Pinkham said, only less frequently and for more specific reasons. To adapt, the company plans to schedule certain teams for the same shifts so they can see their colleagues and whiteboard ideas together, she said, albeit while wearing masks in more sparsely populated conference rooms.


2020 Is the Summer of the Road Trip. Unless You’re Black.

But Ms. Parker, 32, said that she can’t imagine just being able to pack up and go without a plan, like some white families might be able to do.

So for the last six months, she has been meticulously planning their journey. She knows which towns her family will stop in, which they’ll drive straight through, and which they’ll avoid entirely. She also knows on which stretches of road her children won’t be allowed to drink juice or water, to avoid bathroom breaks in towns where the family could encounter racism or violence.

“We try not to stop in places that are desolate and we try to only stop in cities for gas,” she said. “If we have to stop for gas in a rural area, we use a debit card so we don’t have to go into the gas station store. If we are going to stay somewhere overnight, we look at the demographics to make sure we aren’t going to a place where we would be the only black people or where we would be targeted, especially at night.”

Ms. Parker grew up road tripping with family between New York and North Carolina, and her parents took similar precautions. She and her husband have also considered getting a dashboard camera, so that if they are stopped by police and things turn deadly there is some record of it.

In a way, Facebook groups for black travelers and group chats have become the 21st-century version of the “Green Book.” People talk about where they’ve been and follow in each other’s footsteps, sharing where they were treated well and where they felt uncomfortable or unsafe. Many stay in the same hotels, eat at the same restaurants or skip the same towns.

“We go where our friends and family have gone because we know that it’s safe,” said Dianelle Rivers-Mitchell, founder of Black Girls Travel Too, a group tour company for black women. “During this moment, with the protests as a backdrop, and as our community deals with how we were harder hit by coronavirus and we risk facing even more discrimination based on that, I just don’t see road-tripping being it for us.”

The so-called sundown towns — where black people were effectively banned after dark and where those who stayed too late were attacked by white mobs — no longer exist, but, for some black drivers, the fear of getting lost or stuck in a town where being black could lead to violence is a real concern that affects how a road trip is planned.


A Black Running Mate for Biden? More Democrats Are Making the Case

Others speak of the need to energize young voters of color who were uninspired by the 2016 Democratic presidential ticket, warning that summer protests in the streets are not guaranteed to translate into votes in November. And increasingly, many are arguing that for a presidential candidate who values experience in a running mate, personal familiarity with navigating the most searing issues confronting the nation should be a relevant qualification.

“Just like in ’08 — when President Obama selected someone that would help him govern, someone that could hit the ground running on recovery efforts in ’09 — when Joe Biden is elected in November, his running mate, the next vice president, would hit the ground running to address the crisis we have in our nation,” said Clay Middleton, a member of the Democratic National Committee and a well-known South Carolina strategist. “Of the plight of African-Americans, and law enforcement, police reform, a plethora of issues.”

Among those on the private call last month with Mr. Biden were the Democratic strategists Donna Brazile, Leah D. Daughtry, Minyon Moore and Karen Finney; the lawyer and media personality Star Jones; Roslyn M. Brock, the chairman emeritus of the national board of directors for the N.A.A.C.P.; and a lengthy list of activists in civil rights, labor and other issues, according to a readout intended for women who had signed the original petition. Mr. Biden’s campaign manager, Jennifer O’Malley Dillon, and two senior advisers, Anita Dunn and Symone D. Sanders, were also listed as participants, as was Representative Lisa Blunt Rochester, a Delaware Democrat and a member of Mr. Biden’s vice-presidential search committee. A Biden spokesman declined to comment.

“In the moment of our deepest racial division and crisis, really being able to have a ticket that is as reflective of the future and diversity of America as what we’re seeing happen in the streets right now — that, that is the opportunity,” said LaTosha Brown, co-founder of the Black Voters Matter Fund. “If there was a time in America we needed the leadership of a black woman, it is now.”

Ms. Brown, who was also listed as a participant in the call with Mr. Biden, declined to comment on the conversation, but said of the campaign, “I think there’s an openness to explore.”

Mr. Biden, 77, has been clear for months about some of his criteria for a running mate. He wants to choose someone with whom he is “simpatico” on major issues and strategy, even if they disagree on tactics. His vice president must be prepared on “Day 1,” he has said, to assume the presidency if need be. He wants to have open conversations and a strong level of trust with his running mate, he has said, just as he and Mr. Obama did.

He has also suggested he wants someone who will balance the ticket and who “has capacities in areas that I do not,” he said at a fund-raiser last month.

The ‘Invisible’ Garden of Scent

In places with gentle winters, Zones 7 and warmer, Mr. Druse said, “true jasmines and their impostors would be obvious candidates.” Possibilities include winter jasmine (Jasminum polyanthum), star jasmine (Trachelospermum jasminoides, in Zone 8) and Carolina Jessamine (Gelsemium sempervirens).

Many gardeners grow culinary herbs, some of which — the mints and rosemary, for instance — offer the extra delight of scent when brushed against. A group of pots positioned within reach, somewhere you pass many times a day, is an ideal way to incorporate such touch-me plants, even where there is no garden space.

Mr. Druse makes room, front and center, for some herbal-scented plants that aren’t intended for the kitchen — like patchouli, anise hyssop (Agastache) and bee balm (Monarda).

The pelargoniums, or scented geraniums, were his gateway to fragrance. “Scented geraniums helped get me hooked on gardening as a teenager,” he said. As with many of his favorites, their leaves have to be rubbed to release the aromatic oils, which mimic sharp lemon, rose, peppermint, nutmeg and even coconut.

Besides being the best match for native pollinators and other beneficial insects, many native plants offer scent for the gardener to enjoy. A few Mr. Druse suggests considering: the scented foliage of mountain mint (Pycnanthemum); prairie dropseed grass (Sporobolus heterolepis), with late-summer and fall flowers that smell like popcorn or cilantro; and wintergreen (Gaultheria procumbens), whose foliage and fruits bear the scent.

The flowers of perennial black cohosh (Actaea racemosa) are honey-scented; milkweed’s are “thick and syrupy,” he said.

Some of his favorite native shrubs include that Calycanthus of his guessing game; Virginia sweetspire (Itea virginica), which smells like honey; fringetree (Chionanthus virginicus), with a scent of honey and vanilla; various deciduous azaleas (Rhododendron species); and moisture-loving summersweet (Clethra alnifolia), like clove with vanilla.
