Tag: contends

  • 2020 Delhi riots: Shahrukh Pathan contends delay in trial before HC

    New Delhi: Shahrukh Pathan, who had aimed a pistol at a policeman during the 2020 northeast Delhi riots, informed the Delhi High Court on Monday that the conclusion of his trial has been long delayed: in more than a year, only two of about 40 witnesses have been examined.

    A single-judge bench of Justice Dinesh Kumar Sharma was dealing with Pathan’s bail plea, moved in January last year in a case relating to rioting and causing injuries to police personnel; charges against him have already been framed in this case.

    He is simultaneously facing charges in another case connected with the aiming of the pistol.

    A trial court had rejected his bail plea in December 2021.

    Pathan’s counsel, Advocate Khalid Akhtar, submitted: “There is a huge delay in the conclusion of the trial. Only two witnesses have been examined so far out of about 40. I have been attacked in jail too.”

    Akhtar, while urging the court for an early hearing, submitted: “The bail application has been pending for 14 months now. I filed the bail application here in January 2022.”

    The judge then listed the matter for the next hearing on May 2 and directed both Pathan and Delhi Police to file brief written submissions.

    A single-judge bench of Justice Amit Sharma had, on February 9, asked Pathan to file an application before the trial court for an early hearing of his plea alleging that he was assaulted by jail officials.

    Justice Sharma, who was dealing with a similar petition moved by Pathan, had said that since a plea had already been moved before the trial court, it was only just that an application be filed before the concerned court.

    His counsel Akhtar had contended that the trial court, which had listed the matter for the next hearing on February 28, had not passed any order or direction that the relevant CCTV footage be preserved or produced.

    Akhtar had said: “There was no order to the effect that some adequate safety measures be provided to him.”

    To this, Justice Sharma had orally said that the prerequisite was for Pathan to move an application before the concerned court; if that did not work out, the High Court would grant him liberty to return.

    “You move an application for an early hearing before the trial court. If nothing happens, we will see. We will give you the liberty,” he had said.

    Pathan had withdrawn his plea after getting liberty from the HC to approach the trial court for an early hearing of his pending plea.

    Though Pathan is an accused in various cases registered during the riots, this petition was moved in the case of aiming a pistol at Head Constable Deepak Dahiya on February 24, 2020, an incident whose images spread widely on social media.

    The First Information Report (FIR) in this case was registered under various Sections of the Indian Penal Code and Section 27 of the Arms Act. In December 2021, the trial court framed charges against Pathan and other accused in the FIR.

    On January 30, a court discharged a man accused of selling a pistol to Pathan.

    “The case against accused Babu Wasim is essentially based on surmises and conjectures rather than actual material or evidence and there is no ground to presume that the accused committed an offence under Section 25 Arms Act. He is accordingly discharged for the said offence,” Additional Sessions Judge Amitabh Rawat had said.

    Pathan had disclosed that he had purchased a pistol and 20 rounds from Babu Wasim by paying Rs 35,000 in December 2019, the prosecution had said.


    (With inputs from www.siasat.com)

  • Social media is a defective product, lawsuit contends

    The lawsuit also could upstage members of Congress from both parties and President Joe Biden, who have called for regulation since former Facebook product manager Frances Haugen released documents revealing that Meta, the parent company of Facebook and Instagram, knew Instagram users were suffering ill health effects, yet who have failed to act in the 15 months since.

    “Frances Haugen’s revelations suggest that Meta has long known about the negative effects Instagram has on our kids,” said Previn Warren, an attorney for Motley Rice and one of the leads on the case. “It’s similar to what we saw in the 1990s, when whistleblowers leaked evidence that tobacco companies knew nicotine was addictive.”

    Meta hasn’t responded to the lawsuit’s claims, but the company has added new tools to its social media sites to help users curate their feeds, and CEO Mark Zuckerberg has said the company is open to new regulation from Congress.

    The plaintiffs’ lawyers, led by Motley Rice, Seeger Weiss, and Lieff Cabraser Heimann & Bernstein, believe they can convince the judiciary to move first. They point to studies on the harms of heavy social media use, particularly for teens, and Haugen’s “smoking gun” documents.

    Still, applying product liability law to an algorithm is relatively new legal territory, though a growing number of lawsuits are putting it to the test. In traditional product liability jurisprudence, the chain of causality is usually straightforward: a ladder with a third rung that always breaks. But for an algorithm, it is more difficult to prove that it directly caused harm.

    Legal experts even debate whether an algorithm can be considered a product at all. Product liability laws have traditionally covered flaws in tangible items: a hair dryer or a car.

    Case law is far from settled, but an upcoming Supreme Court case could chip away at one of the defense’s arguments. Section 230 of the Communications Decency Act of 1996 protects social media companies by restricting lawsuits against them over content their users post. That legal shield could safeguard the companies from the product liability claim.

    The high court will hear oral arguments in Gonzalez v. Google on Feb. 21, weighing whether Section 230 protects content recommendation algorithms. The case stems from the death of Nohemi Gonzalez, who was killed by ISIS terrorists in Paris in 2015. The plaintiffs’ attorneys argue that Google’s algorithm showed ISIS recruitment videos to some users, contributing to their radicalization in violation of the Anti-Terrorism Act.

    If the court sides with the plaintiffs, it would limit the wide-ranging immunity tech companies have enjoyed and potentially remove a barrier in the product liability case.

    Congress and the courts

    Since Haugen’s revelations, which she expanded on in testimony before the Senate Commerce Committee, lawmakers of both parties have pushed bills to rein in the tech giants. Their efforts have focused on limiting the firms’ collection of data about both adults and minors, reducing the creation and proliferation of child pornography, and narrowing or removing protections afforded under Section 230.

    The two bills that have gained the most attention are the American Data Privacy and Protection Act, which would limit the data tech companies can collect about their users, and the Kids Online Safety Act, which seeks to restrict data collection on minors and create a duty to protect them from online harms.

    However, despite bipartisan support, Congress passed neither bill last year, amid concerns about federal preemption of state laws.

    Sen. Mark Warner (D-Va.), who has proposed separate legislation to reduce the tech firms’ Section 230 protections, said he plans to continue pushing: “We’ve done nothing as more and more watershed moments pile up.”

    Some lawmakers have lobbied the Supreme Court to rule for Gonzalez in the upcoming case, or to issue a narrow ruling that might chip away at the scope of Section 230. Among those filing amicus briefs were Sens. Ted Cruz (R-Texas) and Josh Hawley (R-Mo.), as well as the states of Texas and Tennessee. In 2022, lawmakers in several states introduced at least 100 bills aimed at curbing content on tech company platforms.

    Earlier this month, Biden penned an op-ed for The Wall Street Journal calling on Congress to pass laws that protect data privacy and hold social media companies accountable for the harmful content they spread, suggesting a broader reform. “Millions of young people are struggling with bullying, violence, trauma and mental health,” he wrote. “We must hold social-media companies accountable for the experiment they are running on our children for profit.”

    The product liability suit offers another path to that end. Lawyers on the case say that the sites’ content recommendation algorithms addict users, and that the companies know about the mental health impact. Under product liability law, the lawyers say, the algorithms’ makers have a duty to warn consumers when they know their products can cause harm.

    A plea for regulation

    The tech firms haven’t yet addressed the product liability claims. However, they have repeatedly argued that eliminating or watering down Section 230 will do more harm than good. They say it would force them to dramatically increase censorship of user posts.

    Still, since Haugen’s testimony, Meta has asked Congress to regulate it. In a note to employees written after Haugen spoke to senators, Zuckerberg challenged her claims but acknowledged public concerns.

    “We’re committed to doing the best work we can,” he wrote, “but at some level the right body to assess tradeoffs between social equities is our democratically elected Congress.”

    The firm backs some changes to Section 230, it says, “to make content moderation systems more transparent and to ensure that tech companies are held accountable for combating child exploitation, opioid abuse, and other types of illegal activity.”

    It has introduced 30 tools on Instagram that it says make the platform safer, including an age verification system.

    According to Meta, teens under 16 are automatically given private accounts with limits on who can message them or tag them in posts. The company says minors are shown no alcohol or weight loss advertisements. And last summer, Meta launched a “Family Center,” which aims to help parents supervise their children’s social media accounts.

    “We don’t allow content that promotes suicide, self-harm or eating disorders, and of the content we remove or take action on, we identify over 99 percent of it before it’s reported to us. We’ll continue to work closely with experts, policymakers and parents on these important issues,” said Antigone Davis, global head of safety at Meta.

    TikTok has also tried to address disordered eating content on its platform. In 2021, the company started working with the National Eating Disorders Association to suss out harmful content. It now bans posts that promote unhealthy eating habits and behaviors. It also uses a system of public service announcement hashtags to highlight content that encourages healthy eating.

    The biggest challenge, a company spokesperson said, is that the language around disordered eating and its promotion is constantly changing, and that content that may harm one person may not harm another.

    Curating their feeds

    In the absence of strict regulation, advocates for people with eating disorders are using the tools the social media companies provide.

    They say the results are mixed and hard to quantify.

    Nia Patterson, a regular social media user who’s in recovery from an eating disorder and now works for Equip, a firm that offers treatment for eating disorders via telehealth, has blocked accounts and asked Instagram not to serve up certain ads.

    Patterson uses the platform to reach others with eating disorders and offer support.

    But teaching the platform not to serve her certain content took work, and the occasional weight loss ad still slips through, Patterson said. This kind of algorithm training can be hard, she added, for people who have just begun to recover from an eating disorder or are not yet in recovery: “The three seconds that you watch of a video? They pick up on it and feed you related content.”

    Part of the reason teens are so susceptible to social media’s temptations is that they are still developing. “When you think about teenagers, adolescents, their brain growth and development is not quite there yet,” said Allison Chase, regional clinical director at ERC Pathlight, an eating disorder clinic. “What you get is some really impressionable individuals.”

    Jamie Drago, a peer mentor at Equip, developed an eating disorder in high school, she said, after becoming obsessed with a college dance team’s Instagram feed.

    At the same time, she was seeing influencers’ posts pushing three-day juice cleanses and smoothie bowls. She recalls experimenting with fruit diets and calorie restriction, then starting her own Instagram food account to catalog her insubstantial meals.

    When she thinks back on her experience and her social media habits, she recognizes that the problem wasn’t anything inherently wrong with social media itself; it was the way content recommendation algorithms repeatedly served her content that pushed her to compare herself to others.

    “I didn’t accidentally stumble upon really problematic things on MySpace,” she said, referencing a social media site where she also had an account. Instagram’s algorithm, she said, was feeding her problematic content. “Even now, I stumble upon content that would be really triggering for me if I was still in my eating disorder.”

    (With inputs from www.politico.com)