Plus, a U.S. university cancels classes due to ransomware, and Twitter tests the “soft block”
In an update on its Child Safety page, Apple announced last week that it is pausing plans to implement a feature that would scan iPads, iPhones, and iCloud Photos for child sexual abuse material (CSAM). “Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” the update reads. “There were very valid concerns over the plan Apple was about to deploy. Nobody challenges the goal – we all are in favor of eradicating CSAM, but at the same time there has to be the utmost respect for the privacy of users,” commented Avast’s Luis Corrons. According to Wired, when Apple announced the new feature in August, the backlash from privacy advocates, cryptographers, and even Edward Snowden was “near-instantaneous.”
Support for victims of tech abuse
“Technology misuse is often intertwined with other forms of abuse survivors are facing in their daily life,” wrote the National Network to End Domestic Violence (NNEDV) regarding the prevalence of technology-facilitated abuse it found in a survey of attendees at the 2018 Conference on Crimes Against Women. The most commonly reported form of tech abuse was “unwanted or abusive text messages” (53%), followed by “intimidation and threats via technology” (39%), but there were also reports of tracking and spying. Anybody who believes they are at risk of tech abuse should follow the seven easy steps to prevent stalkerware.
Howard University cancels classes due to ransomware
Howard University, based in Washington, D.C., informed the university community this week that it had been hit by a ransomware attack and that classes were being canceled to give the IT team, authorities, and the FBI more time to address and investigate the issue. Campus Wi-Fi was also taken down. The university said it would update the community on the situation each day at 2pm. So far, the university says, there is no evidence that any personal information has been exfiltrated. For more, see CNET.
Programmers not mad at AI that could replace them
GitHub’s AI tool Copilot has one core function: predicting what a programmer will type next. While that capability seems destined to replace human programmers, coders like Avast’s Janine Luk find the AI useful. Luk told the BBC that she uses Copilot to handle the more boring parts of coding, such as tedious regular expressions. She said Copilot provides “relatively good solutions, even though sometimes it requires some tweaking.” One risk, Luk says, is that it “can potentially violate open source licenses because it can cite something from the training set.”
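The BBC piece doesn’t include Luk’s code, but the kind of pattern she describes is easy to picture. As a rough, hypothetical illustration (the pattern and function name below are ours, not from the article), here is the sort of regex boilerplate an assistant like Copilot typically fills in from a short comment, and the sort of suggestion that still deserves human review before it ships:
```python
import re
from typing import Optional

# Hypothetical sketch: the sort of boilerplate an AI assistant might suggest
# after a comment like "match an ISO-style date (YYYY-MM-DD)".
ISO_DATE = re.compile(
    r"^(?P<year>\d{4})-(?P<month>0[1-9]|1[0-2])-(?P<day>0[1-9]|[12]\d|3[01])$"
)

def extract_date(text: str) -> Optional[dict]:
    """Return the year/month/day parts if text looks like YYYY-MM-DD, else None."""
    match = ISO_DATE.match(text)
    return match.groupdict() if match else None

print(extract_date("2021-09-10"))  # {'year': '2021', 'month': '09', 'day': '10'}
print(extract_date("2021-13-40"))  # None: month and day are out of range
```
Repetitive patterns like this are where an autocomplete saves typing, but as Luk notes, the output may still need tweaking, and a snippet lifted from the training set could carry an open source license along with it.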
Twitter tests new remove follower feature
Testing a new “soft block” privacy feature, Twitter is allowing users to remove followers without blocking them. “We’re making it easier to be the curator of your own followers list,” Twitter Support announced this week, tweeting instructions on how to stop selected followers from automatically receiving your tweets in their feeds. A removed follower will still be able to view your public tweets through a manual search. It’s a softer approach than blocking someone, which stops them entirely from viewing your tweets or sending you direct messages. For more, see The Verge.
This week’s ‘must-read’ on The Avast Blog
Earlier this week, we wrapped up this year’s What Does the Internet Know About Me? series, which has explored the privacy policies and data collection practices of the digital products many of us use in our daily lives. Join us as we look back on the series and read through some of its highlights.