The Morning After: NASA and IBM team up for powerful AI weather model

2 days 5 hours ago

NASA and IBM are building an AI model for weather and climate applications, combining their knowledge and skills in earth science and AI. They say the foundation model (more on that in a bit) should offer “significant advantages over existing technology.” Current AI models, such as GraphCast and FourCastNet, are already generating weather forecasts more quickly than traditional meteorological models. As IBM notes, those are AI emulators rather than foundation models. AI emulators can make weather predictions based on sets of training data, but they don’t have applications beyond that.

The model may predict meteorological phenomena better, inferring high-res information based on low-res data and “identifying conditions conducive to everything from airplane turbulence to wildfires.”

— Mat Smith

The biggest stories you might have missed

Steam’s streaming software now lets you wirelessly play PC VR games on Quest headsets

Bipartisan Senate bill would kill the TSA’s Big Brother airport facial recognition

The best Android phones

Tesla’s long-awaited Cybertruck will start at $60,990 before rebates

You can get these reports delivered daily direct to your inbox. Subscribe right here!

Evernote officially limits free users to 50 notes and one measly notebook

‘We recognize these changes may lead you to reconsider your relationship with Evernote.’


Evernote’s new, tightly leashed free plan will restrict new and existing accounts to 50 notes and one notebook. Existing free customers who exceed those limits can still use their notes, but they’ll need to upgrade to a paid plan to create new ones. Evernote’s premium tiers include a $15 monthly Personal plan with 10GB of monthly uploads — a pricey subscription for what amounts to dedicated note cloud storage. When Evernote’s parent company, Bending Spoons, moved its operations from the US and Chile to Europe, it said the app had been “unprofitable for years.” That push into socks didn’t work.

Continue reading.

The US government halts Meta briefings on foreign influence campaigns

Officials have “paused” tips to Meta.

Meta says the government “paused” briefings related to foreign election interference in July, eliminating a key source of information for the company. During a call with reporters, Meta’s head of security policy, Nathaniel Gleicher, declined to speculate on the government’s motivations, but the timing lines up with a court order earlier this year that restricted the Biden Administration’s contact with social media firms.

The disclosure comes as the company ramps up its efforts to prepare for multiple elections in 2024, and the inevitable attempts to manipulate political conversations on Facebook. The company said in its latest report on coordinated inauthentic behavior (CIB) that China is now the third-most common source of such campaigns on its platform, behind Russia and Iran.

Continue reading.

Google Messages now lets you choose your own chat bubble colors

But this has nothing to do with messaging iPhones and all that drama.

Google is rolling out a string of updates for the Messages app, including customizable text bubbles and background colors. So, if you really want, you can have blue bubbles in your Android messaging app. You can even set a different color for each chat, which could help keep you from saying the wrong thing to the wrong person. But none of this means anything to iPhone users, and it has nothing to do with the prolonged toing and froing over text message compatibility.

Continue reading.

How OpenAI’s ChatGPT has changed the world in just a year

The generative AI chatbot has helped kickstart a multibillion-dollar industry.

SOPA Images via Getty Images

ChatGPT exploded in popularity, from niche online curio to 100 million monthly active users — the fastest user base growth in the history of the internet. In less than a year, it has earned the backing of Silicon Valley’s biggest firms, as well as being shoehorned into myriad applications from academia and the arts to marketing, medicine, gaming and government. ChatGPT is just about everywhere. Engadget’s Andrew Tarantola looks at the blazing first year of OpenAI’s chatbot.

Continue reading.

This article originally appeared on Engadget at https://www.engadget.com/the-morning-after-nasa-and-ibm-team-up-for-powerful-ai-weather-model-121532358.html?src=rss
Mat Smith

Prime members can buy a Blink Video Doorbell and two Outdoor Cameras for $100

2 days 7 hours ago

If you recently moved into a new place or are just looking to update your home's security, now's a good time to do so. Though Black Friday has come and gone, Blink's video doorbell and two fourth-generation outdoor smart security cameras bundle is currently on sale for $100 (the devices add up to $315 if bought separately). There's a small catch, though: the deal is only available to Prime members. 

While Prime members had access to a similar deal back in September, this time the two Blink outdoor cameras included are the fourth-generation model. The cameras offer better image quality and low-light sensitivity, and they have an expanded field of view of 143 degrees, compared to their predecessor's 110 degrees. The cameras should run for two years before the batteries need replacing. The bundle includes six AA lithium batteries, along with one Sync Module 2, one USB cable, three mounting kits and a power adapter. 

Blink's outdoor camera and video doorbell both allow you to hear and speak with whoever is outside. You can also use the doorbell wirelessly by setting up in-app chimes or with a Blink Mini indoor camera. Otherwise, you can choose to hook it up to your existing system. You can store any clips from these devices in the cloud with a 30-day trial of the Blink Subscription Plan included. After that, Blink Plus will cost you $100 annually. 

Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/prime-members-can-buy-a-blink-video-doorbell-and-two-outdoor-cameras-for-100-103504782.html?src=rss
Sarah Fielding

Huawei is allegedly building a self-sufficient chip network using state investment fund

2 days 12 hours ago

We've seen Huawei's surprising strides with its recent smartphones — especially the in-house 7nm 5G processor within them — but apparently the company has been working on something far more significant to bypass the US import ban. According to a new Bloomberg investigation, a Shenzhen city government investment fund created in 2019 has been helping Huawei build "a self-sufficient chip network." 

Such a network would give the tech giant access to enterprises — most notably, the three subsidiaries under a firm called SiCarrier — that are key to developing lithography machines. Lithography machines, especially the high-end extreme ultraviolet variety, usually have to be imported into China, but they're currently restricted by US, Dutch and Japanese sanctions. Huawei apparently went as far as transferring "about a dozen patents to SiCarrier," as well as letting SiCarrier's elite engineers work directly on its sites, which suggests the two firms have a close symbiotic relationship.

Bloomberg's source claims that Huawei has hired several former employees of Dutch lithography specialist, ASML, to work on this breakthrough. The result so far is allegedly the 7nm HiSilicon Kirin 9000S processor fabricated locally by SMIC (Semiconductor Manufacturing International Corporation), which is said to be about five years behind the leading competition (say, Apple Silicon's 3nm process) — as opposed to an eight-year gap intended by the Biden administration's export ban.

Huawei's Mate 60, Mate 60 Pro, Mate 60 Pro+ and Mate X5 foldable all feature this HiSilicon chip, as well as other Chinese components like display panels (BOE), camera modules (OFILM) and batteries (Sunwoda). Huawei having its own network of local enterprises would eventually allow it to rely less on imported components, and potentially even become the halo of the Chinese chip industry — especially in the age of electric vehicles and AI, where more chips are needed than ever (as much as NVIDIA would like to deal with China). That said, Huawei apparently denied that it had been receiving government help to achieve this goal.

Given Huawei's seeming progress, and the fact that China has been pumping billions into its chip industry, the US government will just have to try harder.

This article originally appeared on Engadget at https://www.engadget.com/huawei-is-allegedly-building-a-self-sufficient-chip-network-using-state-investment-fund-051823202.html?src=rss
Richard Lai

TikTok ban in Montana blocked by US judge over free speech rights

2 days 16 hours ago

Montana's unprecedented statewide ban of Chinese short-video app TikTok was supposed to take effect on January 1, 2024, but as reported by Reuters, US District Judge Donald Molloy issued a preliminary injunction one month ahead of that date to block the ban. This means that, for now, ByteDance and app stores can continue serving TikTok to users within Montana without being fined $10,000 per day from the ban's start date.

The judge was quoted as saying the ban "oversteps state power and infringes on the constitutional rights of users" — echoing the legal challenge filed by five TikTok creators the day after the bill was signed back in May, as well as another lawsuit filed by the platform's owner, ByteDance, later that month. It was also questionable whether Google and Apple could have effectively enforced such a statewide ban on their app stores.  

The relevant bill was originally drafted based on claims that the Chinese app would share US users' personal data with the Chinese government — claims ByteDance has denied since the presidency of Donald Trump. "TikTok US user data is stored in the US, with strict controls on employee access," the company said back in August 2020 — and again via a new "transparency" push earlier this year, with reference to "Project Texas," its effort to safeguard US user data with help from Oracle. 

To date, no other US state has passed a bill to bar TikTok. The outcome of Montana's case may hold the key to the app's fate across the rest of the country.

This article originally appeared on Engadget at https://www.engadget.com/tiktok-ban-in-montana-blocked-by-us-judge-over-free-speech-rights-011846138.html?src=rss
Richard Lai

Tesla's Cybertruck is a dystopian, masturbatory fantasy

2 days 18 hours ago

It’s been four years since Tesla first announced the Cybertruck, a hideously ugly electric pickup truck that didn’t seem to actually improve on EVs or pickups in any meaningful way. Instead, the 6,600-pound mass of “stainless super steel” seems to be more the culmination of one man's bizarre fantasy, and that man just so happened to own an entire company he could leverage to birth that fantasy, with all its sharp angles and unnecessary lighting bars, into reality.

Today, Tesla finally delivered the first, long-delayed production Cybertrucks to 10 buyers in a livestream on CEO Elon Musk’s decimated X platform, the first of an unknown number of wealthy consumers who have bought into his grim vision of the future. It's a car that promises — for only those who can afford them — a blank check for vehicular manslaughter and unnecessary survivability from semi-automatic firearms. Its tagline ("more utility than a truck, faster than a sports car") speaks almost poetically to two distinct but orthogonal archetypes of threatened masculinity: the tacti-cool milspec dork, and the showboating rich guy.

A “bulletproof” body has been a key feature since the Cybertruck's introduction in 2019; today Musk admitted it was there for no good reason. “Why did you make it bulletproof?” Musk asked. “Why not?” he answered with a broad grin, before metaphorically waving his genitals at the cheering crowd, while also promising metaphorically larger genitals to anyone who buys the Cybertruck. “How tough is your truck?” Musk smirked.

This admission came alongside video footage of a Cybertruck being sprayed with rounds from a .45 caliber tommy gun, a Glock 9mm and an MP5-SD submachine gun, which also uses 9mm rounds. We'd ask Tesla what cartridges they were firing and whether they were shot from within the effective range of any of these weapons, but the company dissolved its PR team in 2020.

It was a stupid but expected bit of showboating from Musk during his rambling presentation. Right before the gunfire demo, Musk touted the truck’s overall toughness, noting that its low center of gravity made it extremely difficult to flip in an accident. A video also showed the Cybertruck barely moving after a much smaller vehicle moving at 38 mph collided with it. To that, Musk commented that “if you’re ever in an argument with another car, you will win,” glibly encouraging Cybertruck owners to engage in such "arguments."

In a country where both traffic fatalities and gun violence have surged in recent years, it’s a little galling to see Musk promoting his vehicle as some sort of tool for rich people to survive the apocalypse, or even just the inconveniences of a world where their lessers occupy space at all. (All-wheel drive Cybertrucks start at about $80,000; a $60,000 RWD model is supposedly arriving in 2025.) “Sometimes you get these late civilization vibes, the apocalypse could come along at any moment, and here at Tesla we have the finest apocalypse technology,” Musk mused.

Beyond that is the simple fact that SUVs and trucks have gotten dramatically bigger and heavier in the past decade or so. EVs naturally weigh more because of their batteries, but auto manufacturers have been making the fronts of cars larger and taller in recent years, too. That’s a combo that makes these vehicles more dangerous for pedestrians and other drivers alike.

“Whatever their nose shape, pickups, SUVs and vans with a hood height greater than 40 inches are about 45 percent more likely to cause fatalities in pedestrian crashes than cars and other vehicles with a hood height of 30 inches or less and a sloping profile,” research from the Insurance Institute for Highway Safety states. It also noted that pedestrian crash deaths have risen 80 percent since a low in 2009. Anyone who walks or bikes around a city has probably felt that danger before, and it’s even more startling when the wall of a truck stops short when you’re crossing the street. Finally, it’s well known that the speed of a car dramatically impacts the survivability of a pedestrian, which isn’t great when an extremely heavy car also can do 0-60 in less than three seconds.

Now that the Cybertruck is nearly ready for public consumption, it looks like Musk has basically built a vehicle that, for a steep price, enables the worst impulses of US drivers and gives them the “freedom” to do whatever they want. It doesn’t matter if the Cybertruck’s lightbar headlights blind the drivers of smaller vehicles; they should get the hell out of the left lane. And if someone else on the road pisses off a Cybertruck driver, who cares? Other drivers should just accept that they’re about to lose a very expensive and potentially life-threatening “argument” with the Cybertruck’s front fender.

This all should have been obvious right from the start. From day one, the Cybertruck has alluded to a cyberpunk future, a genre with cool haircuts and hacking and slightly problematic orientalism, yes — but also one where wealth inequality is even worse than it currently is, and the rules don’t apply to those with money. The implicit promise of the Cybertruck has always been a vehicle that waives societal standards for people who can afford it, and today’s spectacle made that explicit. To that end, maybe this marketing is as much genius as it is nonsense.

“If Al Capone showed up with a Tommy gun and emptied the entire magazine into the car door, you’d still be alive,” Musk crowed at one point, either promising to revive the dead or oblivious to the terrifying number of human beings who use guns to commit acts of violence. I don’t know about you, but I don’t want to live in a world where being swiss cheesed by lethal armaments is something I need to consider when I’m buying a car. Maybe the rich survivalists playing out Blade Runner meets Mad Max in their Cybertrucks haven't considered that when everything burns down, the power grid will go down too.

This article originally appeared on Engadget at https://www.engadget.com/teslas-cybertruck-is-a-dystopian-masturbatory-fantasy-225648188.html?src=rss
Nathan Ingraham

Apple patches two security vulnerabilities on iPhone, iPad and Mac

2 days 19 hours ago

Apple pushed updates to its iOS, iPadOS and macOS software today to patch two zero-day security vulnerabilities. The company indicated the bugs had been actively exploited in the wild. “Apple is aware of a report that this issue may have been exploited against versions of iOS before iOS 16.7.1,” the company wrote about both flaws in its security reports. Software updates plugging the holes are now available for the iPhone, iPad and Mac.

Researcher Clément Lecigne of Google’s Threat Analysis Group (TAG) is credited with discovering and reporting both exploits. As Bleeping Computer notes, the team at Google TAG often finds and exposes zero-day bugs against high-risk individuals, like politicians, journalists and dissidents. Apple didn’t reveal specifics about the nature of any attacks using the flaws.

The two security flaws affected WebKit, Apple’s open-source browser framework powering Safari. In Apple’s description of the first bug, it said, “Processing web content may disclose sensitive information.” In the second, it wrote, “Processing web content may lead to arbitrary code execution.”

The security patches cover the “iPhone XS and later, iPad Pro 12.9-inch 2nd generation and later, iPad Pro 10.5-inch, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 6th generation and later, and iPad mini 5th generation and later.”

The odds your devices were affected by either of these are extremely minimal, so there’s no need to panic — but, to be safe, it would be wise to update your Apple gear now. You can update your iPhone or iPad immediately by heading to Settings > General > Software Update and tapping the prompt to initiate it. On Mac, go to System Settings > General > Software Update and do the same. Apple’s fixes arrived today in iOS 17.1.2, iPadOS 17.1.2 and macOS Sonoma 14.1.2. 

This article originally appeared on Engadget at https://www.engadget.com/apple-patches-two-security-vulnerabilities-on-iphone-ipad-and-mac-215854473.html?src=rss
Will Shanklin

Tesla's long-awaited Cybertruck will start at $60,990 before rebates

2 days 20 hours ago

After years of production delays, Tesla CEO Elon Musk took to a dimly lit stage on Thursday to hand-deliver the first batch of Cybertruck EVs to their new owners during a delivery event held at the Tesla Gigafactory in Austin, Texas. The company has also finally announced pricing for the luxury electric truck. Prospective buyers can expect to pay anywhere from $60,990 to $100,000 MSRP (and potentially $11,000 less after rebates and tax credits). The company has launched an online configurator tool for those interested in placing an order of their own.

Cybertruck Delivery Event https://t.co/rWd111HvHc

— Tesla (@Tesla) November 29, 2023

Tesla also officially revealed the vehicle's performance specs and model options at the event. The Cybertruck's entry-level version is the $60,990 single-motor rear-wheel drive ($49,890 after "incentives" and an "estimated 3-year gas savings," per the configurator). It will offer an estimated 250 miles of range and a pokey 6.5-second zero-to-60. Who knew steel sheeting would be so heavy? It won't be released until the 2025 model year. 

The mid-level model is the $79,990 all-wheel drive version and sports e-motors on each axle. It weighs just over 6,600 pounds — 1,900 less than the Rivian R1S and nearly 2,500 less than the Hummer EV. "If you are ever in an argument with another car, you will win," Musk said Thursday.

The AWD will offer 340 miles of range, a more respectable 4.1-second zero-to-60 and 600 HP with 7,435 lb-ft of torque. Its 11,000-pound towing capacity is a touch more than the Ford Lightning XLT's 10,000-pound maximum, but less than the 14,000-pound figure Musk quoted in 2019.

For $99,990, you can buy the top-of-the-line Cyberbeast — yes, you will have to refer to it as that in public. The Cyberbeast comes equipped with a trio of e-motors that provide AWD handling, a 320-mile range, a 2.6-second zero-to-60, a 130 MPH top speed, 845 horses and 10,296 lb-ft of torque. Despite those impressive specs, the Cyberbeast is stuck with the same 11,000-pound tow limit as the base model. 

Both the Cyberbeast and the AWD iteration will be able to carry 121 cubic feet of cargo and accommodate five adult passengers. The Cybertruck line is compatible with Tesla's Supercharger network and can accept up to 250kW, enough to add 128 miles of range for every 15 minutes of charge time. The AWD and Cyberbeast are both currently available to order on Tesla's website, though prospective buyers will need to put down a fully refundable $250 deposit upon ordering. 

The prices stated Thursday are significantly higher than the $50,000 price range Musk had long said the vehicle would retail for. For comparison, the Ford F-150 Lightning currently starts at $52,000. Rivian's R1S is more in line with the Cybertruck, retailing for $79,500 after its automaker raised prices from $67,500 last year.

Thursday's event comes after four years of development work that has been the subject of both intense scrutiny and promotion, often simultaneously. For example, when Musk first revealed the Cybertruck design in November 2019, he famously had an assistant throw baseballs at the vehicle's "Tesla Armor Glass" windows, which promptly broke from the impact. That snafu clearly got under Musk's skin as he made time during Thursday's Cybertruck delivery event to recreate the stunt, this time, with what appeared to be less-damaging softballs. No windows came to harm during the event. 

The window smash test wasn't the only comparative stunt of the day. Musk dusted off two classics from the 2019 reveal event: a drag race with a Porsche 911 (this time with the Cybertruck hauling a second Porsche), and a towing contest between the Cybertruck and various other light and medium-duty EV and ICE pickups. Wholly unsurprisingly, Tesla's vehicle managed to easily outmatch all of its competitors in each of the tests put on by Tesla.

The Cybertruck has also been the focus of intense marketing efforts by the company, with myriad consumer product tie-ins. Tesla promised an electric ATV that would be ready at the truck's launch and was reportedly considering an electric dirt bike as well. Those did not materialize. Tesla's RC Cybertruck, produced in partnership with Hot Wheels, did make it to market for a cool $400. Hot Wheels followed that up with a far more affordable $100 RC Cyberquad. The company even released a kid-sized Cyberquad, though the rideable toys were swiftly recalled for lacking basic safety features.

This article originally appeared on Engadget at https://www.engadget.com/teslas-long-awaited-cybertruck-will-start-at-60990-before-rebates-211751127.html?src=rss
Andrew Tarantola

TikTok's new profile tools are just for musicians

2 days 21 hours ago

TikTok has introduced the Artist Account, which offers up-and-coming musicians new ways to curate their profiles and boost discoverability. The new suite of tools isn't just meant for rising stars: established pop icons can also add an artist tag to their profiles, giving their music its own tab next to their videos, likes and reposted content.

To be eligible for an artist tag, TikTok says you will need at least four sounds or songs uploaded to the app. Artists can also pin one of their tunes so it appears first in the music tab. If a musician drops new content, the app will tag songs as ‘new’ for up to 14 days before and up to 30 days after they go live. Any new tracks will automatically be added to a profile’s music tab.

TikTok says over 70,000 artists are already using the new tools. The app has proven to be a breeding ground for content to go viral for new artists and established music makers alike thanks to the lightning speed of dance and lifestyle video trends. TikTok’s impact on the music industry has been so massive that even streamers like Spotify have looked into experimenting with video-first music discovery feeds.

This article originally appeared on Engadget at https://www.engadget.com/tiktoks-new-profile-tools-are-just-for-musicians-201723244.html?src=rss
Malak Saleh

Steam’s streaming software now lets you wirelessly play PC VR games on Quest headsets

2 days 21 hours ago

One of the key selling points of Meta Quest VR headsets is that they can play PC VR titles, but you have to physically connect them to the PC via a link cable. There are third-party workarounds that allow for wireless game streaming, like Virtual Desktop, but now Steam has unveiled an official solution.

Steam Link is a tool available for the Meta Quest 2, 3 and Pro that wirelessly streams PC VR games from your Steam library directly to the headset, so you can continue to avoid cables like the plague. The free app already exists, but until now it has been used to stream Steam games to phones, tablets and TVs; this is the first time it’s available for VR titles.

There’s one major caveat. Just like Virtual Desktop, you still need a capable PC that can run high-end VR games. You just won’t need the link cable. It’s possible this service could work via cloud computing platforms, but the results are likely to be janky at best. Steam outlines recommended PC specs, suggesting an NVIDIA GTX 970 GPU or better, 16GB of RAM and Windows 10 or newer.

Beyond the PC, you also need a 5GHz Wi-Fi router with both the headset and the computer connected to the same network. You can download the Steam Link app directly from the Quest store to get started. This may not be the biggest deal in the world to folks who already use Virtual Desktop, but anything that gets more people into Half-Life: Alyx is a good thing.

This article originally appeared on Engadget at https://www.engadget.com/steams-streaming-software-now-lets-you-wirelessly-play-pc-vr-games-on-quest-headsets-200502768.html?src=rss
Lawrence Bonk

Call of Duty games start landing on NVIDIA GeForce Now

2 days 22 hours ago

One of the major concessions Microsoft made to regulators to get its blockbuster acquisition of Activision Blizzard over the line was agreeing to let users of third-party cloud services stream Xbox-owned games. Starting today, you can play three Call of Duty games via NVIDIA GeForce Now: Modern Warfare 3, Modern Warfare 2 and Warzone.

They're the first Activision games to land on GeForce Now since Microsoft closed the $68.7 billion Activision deal in October. Activision Blizzard games were previously available on GeForce Now but only briefly, as the publisher pulled them days after the streaming service went live for all users in early 2020.

Microsoft began making its first-party games available on GeForce Now this year, starting with Gears 5 in May. More recently, Microsoft started allowing GeForce Now users to stream PC Game Pass titles and Microsoft Store purchases.

Call of Duty titles are major additions, though, especially since that means Warzone fans can play the battle royale on their phone or tablet wherever they are without having to pay anything extra (free GeForce Now users are limited to one hour of gameplay per session). If you've bought MW2 or MW3 on Steam, you can play those through GeForce Now as well. NVIDIA notes that older CoD titles will be available through GeForce Now later.

Another key concession Microsoft made to appease UK regulators was to sell the cloud gaming rights for Activision Blizzard titles to Ubisoft. However, as evidenced here, Microsoft will still honor the agreements it made directly with various cloud gaming services.

This article originally appeared on Engadget at https://www.engadget.com/call-of-duty-games-start-landing-on-nvidia-geforce-now-195040692.html?src=rss
Kris Holt

Formula E now lets you stream every race from its first nine seasons for free

2 days 22 hours ago

There's still time to get acquainted with Formula E before the new season begins in January. To help with that, the all-electric racing series has opened up its vault and made every race from its first nine seasons available to stream for free. Starting with the first event in Beijing in 2014 through this past season's finale in London, there's a lot to relive or watch for the first time. If you're trying to stream them all, that's 90 hours of action over 116 races you have to look forward to.

Formula E's new Race Replay archive is available for free via its website and mobile app. All you need to do to access the back catalog is register for an account. What's more, the series says every race from 2024's Season 10 will be available seven days after airing live. Even if you don't have access to the channels or platforms needed to watch live next year, you'll still be able to follow along a few days after each event.

When the lights go out in Mexico City, Formula E will offer fans expanded viewing options in 2024. Roku will stream 11 races live through its Roku Channel for free. That platform will also offer previews, replays and other commentary in addition to the live events. Paramount+ will stream five races live as simulcasts with CBS, the broadcaster that has been home to Formula E in the US for a while now. 

Season 10 begins January 13 in Mexico City before a double-header in Diriyah, Saudi Arabia later in the month. A total of 17 races are scheduled for 2024, including a US stop in Portland that has been expanded to its own double-header weekend after debuting last season. Formula E completed its preseason testing in Valencia in late October, and you can read our key takeaways from that event here.

This article originally appeared on Engadget at https://www.engadget.com/formula-e-now-lets-you-stream-every-race-from-its-first-nine-seasons-for-free-193820963.html?src=rss
Billy Steele

Bipartisan Senate bill would kill the TSA’s ‘Big Brother’ airport facial recognition

2 days 22 hours ago

US Senators John Kennedy (R-LA) and Jeff Merkley (D-OR) introduced a bipartisan bill Wednesday to end involuntary facial recognition screening at airports. The Traveler Privacy Protection Act would block the Transportation Security Administration (TSA) from continuing or expanding its facial recognition tech program. It would also require the government agency to explicitly receive congressional permission to renew it, and it would have to dispose of all biometric data within three months.

Senator Merkley described the TSA’s biometric collection practices as the first steps toward an Orwellian nightmare. “The TSA program is a precursor to a full-blown national surveillance state,” Merkley wrote in a news release. “Nothing could be more damaging to our national values of privacy and freedom. No government should be trusted with this power.” Other Senators supporting the bill include Edward J. Markey (D-MA), Roger Marshall (R-KS), Bernie Sanders (I-VT) and Elizabeth Warren (D-MA).

The TSA began testing facial recognition at Los Angeles International Airport (LAX) in 2018. The agency’s pitch to travelers framed it as an exciting new high-tech feature, promising a “biometrically-enabled curb-to-gate passenger experience.” The TSA said this summer it planned to expand the program to over 430 US airports within the next few years.

I was back at Washington National Airport this month, and @TSA was up to their old tricks—making it unclear that you ARE able to opt out of using facial recognition technology. I’ll keep holding them accountable. pic.twitter.com/absGn5v1Q3

— Senator Jeff Merkley (@SenJeffMerkley) September 25, 2023

The program at least technically allows travelers to opt out, but that process isn’t always transparent in practice. Merkley posted the video above to X in September, demonstrating how agents guided travelers to the facial scanner without mentioning that it’s optional. No signs near the booths said the scan was optional or explicitly mentioned the gathering of facial data, either. The booths were arranged so that flyers would have difficulty inserting their driver’s license or ID (which is required) without stepping in front of the facial scanner.

Advocacy groups supporting the bill include the ACLU, Electronic Privacy Information Center and Public Citizen. “The privacy risks and discriminatory impact of facial recognition are real, and the government’s use of our faces as IDs poses a serious threat to our democracy,” wrote Jeramie Scott, Senior Counsel and Director of EPIC’s Project on Surveillance Oversight, in Merkley’s press release. “The TSA should not be allowed to unilaterally subject millions of travelers to this dangerous technology.”

“Every day, TSA scans thousands of Americans’ faces without their permission and without making it clear that travelers can opt out of the invasive screening,” Sen. Kennedy wrote in a separate news release. “The Traveler Privacy Protection Act would protect every American from Big Brother’s intrusion by ending the facial recognition program.”

This article originally appeared on Engadget at https://www.engadget.com/bipartisan-senate-bill-would-kill-the-tsas-big-brother-airport-facial-recognition-191010937.html?src=rss
Will Shanklin

JBL Authentics 300 review: Alexa and Google Assistant coexisting

2 days 22 hours ago

Several companies have taken shots at Sonos over the years when it comes to multi-room audio and self-tuning speakers with built-in voice assistants. These devices are a lot more common in 2023 than they used to be, so there’s a whole host of options if you’re looking for alternatives to the Move or Era. JBL is the latest to give it a go with new additions to its Authentics line of speakers. While audio may be its primary use, these devices are the first to run two voice assistants simultaneously without having to switch from one to the other. And on the Authentics 300 ($450), you get a portable unit that doesn’t have to stay parked on a shelf.


Most wireless JBL speakers fit into three categories: they’re either rugged and compact, modern-looking boomboxes or internally lit party units. For this new Authentics series, the company opted for a more refined design: all black with a gold frame around the front speaker grille. It’s certainly an aesthetic that fits in nicely on a shelf, without the raucous palette of some of the company’s smaller options. All three of the Authentics speakers look almost exactly the same, with the main difference being size, although the 300 does have a boombox-like rotating handle the other two don’t. That’s because it’s the only portable option in the range with a built-in battery.

JBL describes the Authentics look as “retro,” but I’m not sure I agree. Sure, there’s a classic vibe thanks to the ‘70s-inspired Quadrex grille the company has employed in the past, but the finer details and onboard controls are decidedly modern. Speaking of controls, up top you’ll find volume, treble and bass knobs that illuminate the level as you turn them. Pressing in the center of the volume dial gives you the playback controls. There are also Bluetooth, power and Moment buttons along with a thin light bar that indicates charging status when the speaker is plugged in. Around back is a microphone mute switch, along with Ethernet, 3.5mm aux, USB-C and power ports.

Software and features

Photo by Billy Steele/Engadget

The features and settings for the Authentics speakers are managed inside the JBL One app. Here, you’re greeted with a list of the company’s products you own as well as their connected status, battery level and whatever media is playing on the device. After selecting the Authentics 300, JBL dumps you into the specifics, with battery level once again visible up top. A media player is just below, complete with the ability to sync Amazon Music, Tidal, Napster, Qobuz, TuneIn, iHeartRadio and Calm Radio so you can play them directly inside this app.

JBL offers some limited EQ customization. There’s a manual slider with options for bass, mid and treble, but that’s it. You won’t find any carefully tuned presets or the ability to make more detailed adjustments along the curve. To get to your tunes quickly, JBL offers a feature called Moment. Accessible via the heart button on the speaker, this allows you to save a favorite album or playlist from the app’s list of supported streaming services. You can also specify volume and auto-off timing during setup.

Lastly, a word on streaming music over Wi-Fi. The Authentics line supports a range of options here, including AirPlay, Chromecast, Alexa, Spotify Connect and Tidal Connect, all of which are more convenient than swiping over to the Bluetooth menu and pairing the speaker every time you use it. With Wi-Fi, playing music on the Authentics devices is just a couple of taps away inside the app where you’re already browsing and selecting music or podcasts. The speakers also support multi-room audio via AirPlay, Alexa and the Google Home app.

Double assistants, double the fun

Photo by Billy Steele/Engadget

JBL says the Authentics series is the first set of speakers to run two voice assistants simultaneously. Each of the three units can employ both Alexa and Google Assistant without you having to pick one or the other beforehand. This opens up availability across compatible smart home devices and it means your speaker choice isn’t as limited by your go-to assistant.

The speaker never had trouble hearing my commands and it didn’t mistake a query for one assistant with a question for the other. When you ask Google Assistant for help, a white light shows at the top center of the speaker grille. Summon Alexa and that LED burns blue until your convo is over. When you mute the microphones with the switch on the back of the 300, that light glows red and remains until you turn them back on. As is the case with any smart speaker, the voice command limitations are the general hindrances of the assistants themselves rather than any shortfalls of the speaker.

Sound quality

The Authentics 300 really shines with more mellow, chill music like jazz, bluegrass and acoustic-driven country. There’s a warm inviting sound with great clarity across those styles. When you jump to the full band chaos of metal and hardcore, or even the guitar-heavy but mellifluous tones of Chris Stapleton, the speaker’s tuning overemphasizes vocals and the lack of bassy thump creates a muddy overall sound.

Sure, you can dial up the bass with the physical controls or the EQ in the app, but that doesn’t add the kind of deep low-end that would open up the soundstage. It does improve the overall tuning of albums like Stapleton’s Higher, but there’s still an overemphasis on vocals. You can really hear the impact on The Killers’ Rebel Diamonds, as Brandon Flowers almost entirely drowns out the backing synth on “Jenny Was A Friend Of Mine” from Hot Fuss.

At times though, the Authentics 300 is a joy to listen to. Put on some Miles Davis and the speaker is at its best. Ditto for the bluegrass of Nickel Creek, the mellow country tunes of Charles Wesley Godwin and classic Christmas mixes. However, the inconsistency across styles is frustrating. Interestingly, JBL says the Authentics speakers offer automatic self-tuning every time you power them on, but I didn’t notice much difference as I moved the 300 around.

Battery life

Photo by Billy Steele/Engadget

JBL says the Authentics 300 will last up to eight hours on a charge. Within two minutes of unplugging, the JBL One app already had the battery level down two percent while playing music via AirPlay 2 at about 30 percent volume. That may seem like a low level, but it’s good for “working music” on this speaker. After 30 minutes, the app showed 88 percent, but the drain slowed from there, and I still had 24 percent remaining when the eight hours were up. During a test over Bluetooth, the percentages fell in a similar fashion, but I had no problem making it to eight hours at 50 percent volume (Bluetooth was quieter than AirPlay at 30 percent).

JBL does offer a Battery Saving Mode to help you maximize playtime when you’re away from home. This setting “optimizes” both volume and bass to extend battery life, according to the company. There’s also an optional automatic power off feature that kicks in at either 15 minutes, 30 minutes or an hour when you’re not connected to power and audio is no longer playing.

The competition

JBL offers two alternatives to the Authentics 300 within the same speaker range. The smaller Authentics 200 ($350) is more compact, but not portable, while the larger 500 ($700) is a high-fidelity unit with support for Dolby Atmos. Both still run two voice assistants at the same time and have both Bluetooth and Wi-Fi, along with everything else the Authentics line offers. To support that immersive audio, the Authentics 500 has more drivers than the other two: three 25mm tweeters, three 2.75-inch midrange drivers and a 6.5-inch subwoofer. I look forward to seeing if the extra components and added 170 watts of output power improve sound quality, though its frequency response only extends slightly lower than the 300’s (40Hz vs. 45Hz).

If you’re looking for something portable that can also pull double duty at home, the Sonos Move 2 is a solid option. It’s too big to haul around with ease, but it does support both Bluetooth and Wi-Fi along with improved sound and better battery life compared to version 1.0. There’s also startling loudness and a durable design. What’s more, it’s the same price as the Authentics 300 at $449. For something more stationary and immersive, you could get the Sonos Era 300 without paying more. My colleague Nathan Ingraham noted the excellent sound quality on this unit during his review, but he did encounter inconsistent performance when it came to spatial audio. There’s also no Google Assistant support on this model.


When I try to come up with a final verdict on the Authentics 300, I find myself running in circles. For everything I like about the speaker, there’s immediately something that I don’t. The company certainly deserves some kudos for being the first to run two assistants at the same time and for figuring out how to do that with no confusion or headaches. However, the inconsistent sound quality is a major problem, especially on a $450 speaker. And while the device offers better-than-advertised battery life, its larger size makes portability an issue. So unless you absolutely need to seamlessly switch between Alexa and Google Assistant, there are better-sounding options.

This article originally appeared on Engadget at https://www.engadget.com/jbl-authentics-300-review-alexa-and-google-assistant-coexisting-190036434.html?src=rss
Billy Steele

Meta sues FTC to block new restrictions on monetizing kids’ data

2 days 23 hours ago

Meta has sued the Federal Trade Commission (FTC) in an attempt to stop regulators from reopening a landmark $5 billion privacy settlement from 2020 and to preserve its ability to monetize kids’ data across apps like Facebook, Instagram and WhatsApp. This comes after a federal judge ruled on Monday that the FTC would be allowed to expand on 2020’s privacy settlement, paving the way for the agency to propose tough new rules on how the social media giant could operate in the wake of the Cambridge Analytica scandal.

Today’s lawsuit demands an immediate stop to the FTC’s proceedings, calling it an “obvious power grab” and an “unconstitutional adjudication by fiat.” A Meta spokesperson even referred to the FTC as “prosecutor, judge, and jury in the same case”, as reported by Bloomberg. This is the second attempt by Facebook’s parent company to stop the sanctions in court.

The FTC, for its part, says that Meta has repeatedly violated the terms of 2020’s settlement regarding user privacy. The agency also says that the company has violated the Children’s Online Privacy Protection Act (COPPA) by monetizing the data of younger users. The FTC has already been given the go ahead by a judge to restrict this type of monetization, a decision Meta hopes to overturn.

The FTC also seeks to implement new restrictions that limit Meta’s use of facial recognition, as well as a complete moratorium on new products and services until a third party completes an audit to determine whether the company is complying with its privacy obligations.

“Facebook has repeatedly violated its privacy promises,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.” To that end, multiple states have sued Meta to stop the monetization of children’s data, along with the EU.

The FTC has been a consistent thorn in Meta’s side, as the agency tried to stop the company’s acquisition of VR software developer Within on the grounds that the deal would deter "future innovation and competitive rivalry." The agency dropped this bid after a series of legal setbacks. It also opened up an investigation into the company’s VR arm, accusing Meta of anti-competitive behavior.

Corporations have been all over the FTC lately in attempts to paint the agency as a prime example of government overreach. Beyond Meta, biotech giant Illumina is suing the FTC to halt a decision blocking its $7 billion acquisition of the cancer detection startup Grail.

This article originally appeared on Engadget at https://www.engadget.com/meta-sues-ftc-to-block-new-restrictions-on-monetizing-kids-data-185051764.html?src=rss
Lawrence Bonk

Can digital watermarking protect us from generative AI?

2 days 23 hours ago

The Biden White House recently enacted its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the Federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.

A quick history of watermarking

Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means to prevent currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.

Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system the company built, and used until it was sold in the 1980s, identified music owned by Muzak using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.

Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as into government applications like authenticating driver’s licenses, national currencies and other sensitive documents. The Digimarc corporation, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It’s also been used in applications ranging from brand anti-counterfeiting to enhanced material recycling efficiencies.

The here and now

Modern digital watermarking operates on the same principles, imperceptibly embedding additional information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but are largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content, but rather provide a record of where the content originated or who the copyright holder is.
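To make the idea concrete, here’s a deliberately simplified sketch of the general principle using least-significant-bit steganography. This is a toy illustration, not Digimarc’s or any vendor’s actual algorithm — commercial watermarks use frequency-domain techniques designed to survive cropping, compression and re-encoding — but it shows how an identifier can ride along in pixel data without visibly changing the image.

```python
# Toy example of imperceptible embedding: hide a short identifier in the
# least-significant bits of an image's pixel values. Flipping only the
# lowest bit shifts each channel by at most 1/255 -- invisible to a
# viewer, but trivial for software to read back.

def embed_watermark(pixels: list[int], message: bytes) -> list[int]:
    """Write each bit of `message` into the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels: list[int], length: int) -> bytes:
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

image = [200, 13, 77, 91] * 16              # stand-in for raw pixel data
marked = embed_watermark(image, b"mark")    # 4 bytes -> 32 pixel LSBs
assert extract_watermark(marked, 4) == b"mark"
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))  # imperceptible change
```

Note that nothing here prevents copying: as the paragraph above says, the mark only records provenance, and a scheme this naive is also easy to strip, which is exactly why real systems spread the signal redundantly across the whole asset.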

The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at University of Chicago, told Engadget via email.

“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”

Zhao says that while the White House's executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”

He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”

“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.

We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.

“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.

How Content Credentials work

With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021. 

CR attaches additional information about an image whenever it is exported or downloaded in the form of a cryptographically secure manifest. The manifest pulls data from the image or video header — the creator’s information, where it was taken, when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that cannot be easily stripped like EXIF and metadata (i.e. the technical details automatically added by the software or device that took the image) when uploaded to social media sites (on account of the cryptographic file signing). Not unlike blockchain technology! 
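The core mechanism described above — a signed manifest whose claims are bound to the actual pixels via a cryptographic hash — can be sketched in a few lines. This is an illustrative simplification, not the real C2PA format (which uses JUMBF containers and X.509 certificate chains rather than the shared-key HMAC used here as a stand-in for signing), but it shows why the credentials can’t simply be edited off like ordinary metadata: changing either the content or the claims breaks verification.

```python
import hashlib
import hmac
import json

# Stand-in signing key; real C2PA manifests are signed with X.509 certificates.
SIGNING_KEY = b"demo-signing-key"

def make_manifest(content: bytes, claims: dict) -> dict:
    """Bind provenance claims to the content via its hash, then sign the result."""
    body = {"content_sha256": hashlib.sha256(content).hexdigest(), **claims}
    payload = json.dumps(body, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": signature}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check both the signature over the claims and the hash of the content."""
    payload = json.dumps(manifest["body"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(manifest["signature"], expected)
    hash_ok = manifest["body"]["content_sha256"] == hashlib.sha256(content).hexdigest()
    return sig_ok and hash_ok

photo = b"...raw image bytes..."
manifest = make_manifest(photo, {"creator": "Jane Doe", "tool": "Example Camera"})
assert verify_manifest(photo, manifest)                # untouched content checks out
assert not verify_manifest(photo + b"edit", manifest)  # any alteration breaks the bind
```

In the real standard, each edit appends a new signed claim rather than invalidating the manifest outright, which is how the chain of edits described above stays auditable.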

Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it, and so they simply ignore the data.

“The analogy that we've used in the past is one of an envelope,” Chief Technology Officer of Digimarc, Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside “and that's where the watermark sits. It's actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”

Should someone manage to remove the watermark (turns out, not that difficult, just screenshot the image and crop out the icon) the credentials can be reattached through Verify, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. If a user encounters the image content in the wild, they can check its credentials by clicking on the CR icon to pull up the full manifest and verify the information for themselves and make a more informed decision about what online content to trust.

Sickles envisions these authentication systems operating in coordinating layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles said. “They become a much, much stronger system as a basis for authenticity and understanding provenance around an image than they would individually.” Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.

In practice, we’re already seeing the standard being incorporated into physical commercial products like the Leica M11-P, which will automatically affix a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard in its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.

That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it's adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.

Nightshade: The CR alternative that’s deadly to databases

Some security researchers have had enough waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.

Zhao and his team have developed Glaze, a system for creators that disrupts a generative AI’s ability to mimic style (by exploiting the concept of adversarial examples). It changes the pixels in a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these "glazed" images, it becomes unable to exactly replicate the intended style of art — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, in keeping their branded artistic styles commercially safe.

While Glaze focuses on preventative actions to deflect the efforts of illicit data scrapers, SAND Lab’s newest tool is whole-heartedly punitive. Dubbed Nightshade, the system will subtly change the pixels in a given image, but instead of confusing the models it’s trained with like Glaze does, the poisoned image will corrupt the training database it’s ingested into wholesale, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same issues again.

The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.

Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”

This article originally appeared on Engadget at https://www.engadget.com/can-digital-watermarking-protect-us-from-generative-ai-184542396.html?src=rss
Andrew Tarantola

YouTube Music brings personalized album art to its 2023 Recap

2 days 23 hours ago

YouTube Music users who have seen their Spotify- and Apple Music-using friends share their listening stats from this year can now join the party. YouTube Music Recap is now live and you can access it from the 2023 Recap page in the app. You'll be able to see your top artists, songs, moods, genres, albums, playlists and more from 2023. There's also the option to view your Recap in the main YouTube app, along with some other new features for 2023.

This year, you'll be able to add custom album art. YouTube will create this using your top song and moods from the year, as well as your energy score. The platform will mash together colors, vibes and visuals to create a representation of your year in music.

YouTube Music

YouTube says another feature will match your mood with your top songs of the year. You might see, for instance, the percentages of songs you listened to that are classed as upbeat, fun, dancey or chill. Last but not least, you can use snaps from Google Photos to create a customized visual that sums up your year in music (and perhaps your year in travel too).

This article originally appeared on Engadget at https://www.engadget.com/youtube-music-brings-personalized-album-art-to-its-2023-recap-182904330.html?src=rss
Kris Holt

Evernote officially limits free users to 50 notes and one measly notebook

3 days ago

Evernote has confirmed the service’s tightly leashed new free plan, which the company tested with some users earlier this week. Starting December 4, the note-taking app will restrict new and current accounts to 50 notes and one notebook. Existing free customers who exceed those limits can still view, edit, delete and export their notes, but they’ll need to upgrade to a paid plan (or delete enough old ones) to create new notes that exceed the new confines.

The company says most free accounts are already inside those lines. “When setting the new limits, we considered that the majority of our Free users fall below the threshold of fifty notes and one notebook,” the company wrote in an announcement blog post. “As a result, the everyday experience for most Free users will remain unchanged.” Engadget reached out to Evernote to clarify whether “the majority of Free users” staying within those bounds includes long-dormant accounts that may have tried the app for a few minutes a decade ago and never logged in again. We’ll update this article if we hear back.

Evernote’s premium plans, now practically essential for anything more than minimal use, include a $15 monthly Personal plan with 10GB of monthly uploads. You can double that to 20GB (and get other perks) with an $18 tier. It also offers annual versions of those plans for $130 and $170, respectively.

The company acknowledged in its announcement post that “these changes may lead you to reconsider your relationship with Evernote.” Leading alternatives with more bountiful free plans include Notion, Microsoft OneNote, Google Keep, Bear (Apple devices only), Obsidian and SimpleNote.

Earlier this year, Evernote’s parent company, Bending Spoons, moved its operations from the US and Chile to Europe, laying off nearly all of the note-taking app’s employees. When doing so, it said the app had been “unprofitable for years.”

This article originally appeared on Engadget at https://www.engadget.com/evernote-officially-limits-free-users-to-50-notes-and-one-measly-notebook-174436735.html?src=rss
Will Shanklin

Expressive E Osmose review: A game-changing MPE keyboard, but a frustrating synthesizer

3 days ago

When I first got to see the Expressive E Osmose way back in 2019, I knew it was special. In my 15-plus years covering technology, it was one of the only devices I’ve experienced that actually had the potential to be truly “game changing.” And I’m not being hyperbolic.

But, that was four years ago, almost to the day. A lot has changed in that time. MPE (MIDI Polyphonic Expression) has gone from futuristic curiosity to being embraced by big names like Ableton and Arturia. New players have entered and exited the scene. More importantly, the Osmose is no longer a promising prototype, but an actual commercial product. The questions, then, are obvious: Does the Osmose live up to its potential? And, does it seem as revolutionary today as it did all those years ago? The answers, however, are less clear.

Terrence O'Brien / Engadget

What sets the Osmose ($1,799) apart from every other MIDI controller and synthesizer (MPE or otherwise) is its keybed. At first glance, it looks like almost any other keyboard, albeit a really nice one. The body is mostly plastic, but it feels solid and the top plate is made of metal. (Shoutout to Expressive E, by the way, for building the Osmose out of 66 percent recycled materials and for making the whole thing user repairable — no glue or specialty screws to be found.)

The keys themselves have this lovely, almost matte finish and a healthy amount of heft. It’s a nice change of pace from the shiny, springy keys on even some higher-end MIDI controllers. But the moment you press down on a key you’ll see what sets it apart — the keys move side to side. And this is not because it’s cheaply assembled and there’s a ton of wiggle. This is a purposeful design. You can bend notes (or control other parameters) by actually bending the keys, much like you would on a stringed instrument.

This is huge for someone like me who is primarily a guitar player. Bending strings and wiggling my fingers back and forth to add vibrato comes naturally. And, as I mentioned in my review of Roli’s Seaboard Rise 2, I find myself doing this even on keyboards where I know it will have no effect. It’s a reflex.

It’s a simple thing to explain, but its effect on your playing is difficult to encapsulate. It’s all of the same things that make playing the Seaboard special: the slight pitch instability from the unintentional micro movements of your fingers, the ability to bend individual notes for shifting harmonies and the polyphonic aftertouch that allows you to alter things like filter cutoff on a per-note basis.

These tiny changes in tuning and expression add an almost ineffable fluidity to your playing. In particular, for sounds based on acoustic instruments like flutes and strings, it adds an organic element missing from almost every other synthesizer. There is a bit of a learning curve, but I got the hang of it after just a few days.

What separates it from the Roli, though, is its form factor. While the Seaboard is keyboard-esque, it’s still a giant squishy slab of silicone. It might not appeal to someone who grew up taking piano lessons every week. The Osmose, on the other hand, is a traditional keyboard, with full-sized keys and a very satisfying action. It’s probably the most familiar and approachable implementation of MPE out there.

If you are a pianist, or an accomplished keyboard player, this is probably the MPE controller you’ve been waiting for. And it’s hands-down one of the best on the market.

Where things get a little dicier is when you look at the Osmose as a standalone synthesizer. But let’s start where it goes right: the interface. The screen to the left of the keyboard is decently sized (around 4 inches) and easy to read at any angle. There are even some cute graphics for parameters such as timbre (a log), release (a yo-yo) and drive (a steering wheel).

Terrence O'Brien / Engadget

There aren’t a ton of hands-on controls, but menu diving is kept to a minimum with some smart organization. The four buttons across the top of the screen take you to different sections for presets, synth (parameters and macros), sensitivity (MPE and aftertouch controls) and playing (mostly just for the arpeggiator at the moment). Then to the left of the screen there are two encoders for navigating the submenus, and the four knobs below control whatever option is listed above them on the screen. So, no, you’re not going to be doing a lot of live tweaking, but you also won’t spend 30 minutes trying to dial in a patch.

Part of the reason you won’t spend 30 minutes dialing in a patch is because there really isn’t much to dial in. The engine driving the Osmose is Haken Audio’s EaganMatrix, and Expressive E keeps most of it hidden behind six macro controls. In fact, you can’t really design a patch from scratch — at least not on the synth directly. You need to download the Haken Editor, which requires Max (not the streaming service), to do serious sound design. Then you need to upload your new patch to the Osmose over USB. Other than that, you’re stuck tweaking presets.

Terrence O'Brien / Engadget

This isn’t necessarily a bad thing because, frankly, EaganMatrix feels less like a musical instrument and more like a PhD thesis. It is undeniably powerful, but it’s also confusing as hell. Expressive E even describes it as “a laboratory of synthesis,” and that seems about right; patching in the EaganMatrix is like doing science. Except it’s not the fun science you see on TV with fancy machines and test tubes. Instead it’s more like the daily grind of real-life science, where you stare at a nearly inscrutable series of numbers, letters, mathematical constants and formulas.

I couldn’t get the Osmose and Haken Editor to talk to each other on my studio laptop (a five-year-old Dell XPS), though I did manage to get it to work on my work-issued MacBook. That being said, it was mostly a pointless endeavor. I simply can’t wrap my head around the EaganMatrix. I was able to build a very basic patch with the help of a tutorial, but I couldn’t actually make anything usable.

There are some presets available on Patchstorage, but the community is nowhere near as robust as what you’d find for the Organelle or ZOIA. And it’s not obvious how to actually upload that handful of presets to the Osmose. You can drag and drop the .mid files you download to the empty slots across the top of the Haken Editor, and that will add them to the Osmose’s user presets. But you won’t actually see that reflected on the Osmose itself until you turn it off and turn it back on.

Honestly, many of the presets available on Patchstorage cover the same ground as the 500 or so factory presets that ship with the Osmose. And it’s while browsing those hundreds of presets that both the power and the limitations of the EaganMatrix become obvious. It’s capable of covering everything from virtual analog to FM to physical modeling, and even some pseudo-granular effects. Its modular, matrix-based patching system is so robust that it would almost certainly be impossible to recreate physically (at least without spending thousands of dollars).

Now, this is largely a matter of taste, but I often find the sounds that come out of this obviously overpowered synth underwhelming. They’re definitely unique and in some cases probably only possible with the EaganMatrix. But the virtual analog patches aren’t very “analog,” the FM ones lack the character of a DX7 or the modern sheen of a Digitone, and the bass patches could use some extra oomph. Sometimes patches on the Osmose feel like tech demos rather than something you’d actually use musically.

Terrence O'Brien / Engadget

That’s not to say there are no good presets. There are some solid analog-ish sounds and there are a few decent FM pads. But it’s the physical modeling patches where EaganMatrix is at its best. They definitely land in a kind of uncanny valley, though — not convincing enough to be mistaken for the real thing, but close enough that it doesn’t seem quite right coming out of a synthesizer.

Still, the way tuned drums and plucked or bowed strings are handled by the Osmose is impressive. Quickly tapping a key can get you a ringing resonant sound, while holding it down mutes it. Aftertouch can be used to trigger repeated plucks that increase in intensity as you press harder. And bowed patches can be smart enough to play notes within a certain range of each other as legato, while still allowing you to play more spaced-out chords with your other hand. (This latter feature is called Pressure Glide and can be fine-tuned to suit your needs.)

The level of precision with which you can gently coax sound out of some presets with the lightest touch is unmatched by any synth or MIDI controller I’ve ever tested. And that becomes all the more shocking when you realize that very same patch can also be a percussive blast if you strike the keys hard.

But, at the end of the day, I rarely find myself reaching for the Osmose — at least not as a synthesizer. I’ve been testing one for a few months now, and while I have used it quite extensively in my studio, it’s been mostly as a controller for MPE-enabled soft synths like Arturia’s Pigments and Ableton’s Drift. It’s undeniably one of the most powerful MIDI controllers on the market. My one major complaint on that front is that its incredible arpeggiator isn’t available in controller mode.

The Osmose is a gorgeous instrument that, in the right hands, is capable of delivering nuanced performances unlike anything else, even if, at times, the borrowed sound engine doesn’t live up to the keyboard’s lofty potential.

This article originally appeared on Engadget at https://www.engadget.com/expressive-e-osmose-review-a-game-changing-mpe-keyboard-but-a-frustrating-synthesizer-170001300.html?src=rss
Terrence O'Brien

Google's latest Android update includes AI-created image descriptions and animations for voice messages

3 days ago

Google is rolling out a trio of system updates to Android, Wear OS and Google TV devices. Each brings new features to associated gadgets. Android devices, like smartphones, are getting updated Emoji Kitchen sticker combinations. You can remix emojis and share them with friends as stickers via Gboard.

Google Messages for Android is getting a nifty little refresh. There’s a new beta feature that lets users add a unique background and an animated emoji to voice messages. Google’s calling the software Voice Moods and says it’ll help users better express how they’re “feeling in the moment.” Nothing conveys emotion more than a properly positioned emoji. There are also new reactions for messages that go far beyond simple thumbs-ups, with some taking up the entire screen. In addition, you’ll be able to change chat bubble colors.

The company’s also adding an interesting tool that provides AI-generated image descriptions for those with low vision. The TalkBack feature will read aloud a description of any image, whether sourced from the internet or a photo that you took. Google’s even adding new languages to its Live Caption feature, enhancing the pre-existing ability to take phone calls without needing to hear the speaker. Better accessibility is always a good thing.

Wear OS is getting a bunch of little updates. You can control more smart home devices and light groups directly from a watch, which comes in handy when creating mood lighting. You can also tell your smart home devices that you are home or away with a tap. There’s a new Assistant Routines feature that automates daily tasks and an Assistant At a Glance shortcut on the watch face that displays information relevant to your day, like the weather and traffic data.

As for Google TV, there are ten new free channels to choose from, bringing the grand total to well over 800. None of these channels require an additional subscription, but they will have commercials. All of these updates begin rolling out today, but it could be a few weeks before they reach everyone’s devices.

This article originally appeared on Engadget at https://www.engadget.com/googles-latest-android-update-includes-ai-created-image-descriptions-and-animations-for-voice-messages-172522129.html?src=rss
Lawrence Bonk

Google Messages now lets you choose your own chat bubble colors

3 days ago

Google is rolling out a string of updates for the Messages app, including the ability to customize the colors of the text bubbles and backgrounds. So, if you really want to, you can have blue bubbles in your Android messaging app. You can have a different color for each chat, which could help prevent you from accidentally leaking a secret to family or friends.

With the help of on-device Google AI (meaning you'll likely need a recent Pixel device to use this feature), you can transform photos into reactions with Photomoji. All you need to do is pick a photo, decide which object (or person or animal) you'd like to turn into a Photomoji and hit the send button. These reactions will be saved for later use, and friends in the chat can use any Photomoji you send them as well.

The new Voice Moods feature allows you to apply one of nine different vibes to a voice message by showing visual effects such as heart-eye emoji, fireballs (for when you're furious) and a party popper. Google says it has also upgraded the quality of voice messages by bumping up the bitrate and sampling rate.

In addition, there are more than 15 Screen Effects you can trigger by typing things like "It's snowing" or "I love you." These will make "your screen erupt in a symphony of colors and motion," Google says. Elsewhere, Messages will display animated effects when certain reactions and emoji are used.

On top of all of that, users will now be able to set up a profile that attaches their name and photo to their phone number, giving them more control over how they appear across Google services. The company says this feature could help when it comes to receiving messages from a phone number that isn't in your group chats. It could help you know the identity of everyone in a group chat too.

Some of these features will be available in beta starting today in the latest version of Google Messages. Google notes that some feature availability will depend on market and device.

Google is rolling out these updates alongside the news that more than a billion people now use Google Messages with RCS enabled every month. RCS (Rich Communication Services) is a more feature-filled and secure format of messaging than SMS and MMS. It supports features such as read receipts, typing indicators, group chats and high-res media. Google also offers end-to-end encryption for one-on-one and group conversations via RCS.

For years, Google had been trying to get Apple to adopt RCS for improved interoperability between Android and iOS. Apple refused, perhaps because iMessage (and its blue bubbles) have long been a status symbol for its users. However, Apple has relented, likely to fall in line with European Union regulations: the company recently said it would start supporting RCS in 2024.

This article originally appeared on Engadget at https://www.engadget.com/google-messages-now-lets-you-choose-your-own-chat-bubble-colors-170042264.html?src=rss
Kris Holt