ISPs and AI

One of the most common questions I’ve been asked lately is what impact I think AI will have on the broadband industry.

All of the big ISPs have been actively pursuing the use of AI. For example, AT&T Labs says it is investigating the use of AI to optimize the customer experience and auto-heal the network. Comcast says that it is using AI to help process petabytes of data every day. Comcast also worked with Broadcom to develop the first broadband chip for nodes, amps, and modems that brings AI into the network. Verizon is working on an AI solution to improve the customer experience in the IVR systems that handle customer calls. Charter is working AI into its customer interface and is also using AI to help customers generate commercials for advertising on the cable network.

Before talking about those uses, a basic primer on AI is needed. Most people are familiar with public AI platforms like ChatGPT or the Google Cloud Platform. No big corporations are using the open public versions of AI, since any data dumped into those systems can become available to other users. Instead, corporations are buying and implementing private versions of AI that they train using their own data. One of the common issues with public AI platforms is that AI will hallucinate and invent an answer to a question. However, hallucination can be better controlled in private deployments where the user strictly controls the data.

All of the big ISPs, and seemingly most companies that field a lot of calls from customers, want to use AI to improve the customer experience. There are several approaches. One primary use of AI is to eliminate phone menus that force customers to wade through options to choose who they want to talk to. AI can interpret a customer request and direct the call to the appropriate place. AI can also quickly pull all of the information about a caller and put it at the fingertips of a customer service rep. Maybe the most important feature is that a customer conversation can carry across different customer service reps, meaning that a caller doesn’t have to repeat basic information every time they are transferred.
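As a rough sketch of the menu-elimination idea, a caller’s free-form request can be classified into a department instead of being forced through a phone tree. Real systems use trained intent models or LLMs; the departments, keywords, and scoring below are purely illustrative assumptions.

```python
# Minimal sketch of AI call routing: classify a caller's free-form request
# into a department instead of forcing them through a phone menu. A real
# system would use a trained intent model or an LLM; this keyword scorer
# and these department names are made up for illustration.

INTENTS = {
    "billing": {"bill", "charge", "payment", "invoice", "refund"},
    "tech_support": {"outage", "slow", "modem", "router", "disconnect"},
    "sales": {"upgrade", "plan", "price", "bundle", "speed"},
}

def route_call(transcript: str) -> str:
    words = set(transcript.lower().split())
    # Score each department by keyword overlap with the caller's request
    scores = {dept: len(words & kws) for dept, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    # Fall back to a human agent when nothing matches
    return best if scores[best] > 0 else "general_agent"

print(route_call("my bill has a charge I don't recognize"))  # billing
print(route_call("something unrelated to any department"))   # general_agent
```

A production version would also use the same classification to pull up the caller’s account before the call is connected.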

There are companies that have fully automated the customer interface with AI, but it’s not likely that any big ISPs have gotten that bold yet. All of the feedback I’ve heard is that it’s still far too easy for an AI system to badly misinterpret what a customer wants. The same goes for attempts to fully automate an online chatbot. It doesn’t seem like anybody has come close to perfecting this yet, and doing it clumsily is frustrating for customers. But who knows, maybe in the future, most customer interfaces could be handled entirely by an AI representative.

Big ISPs are all investigating the use of AI in the network. The most obvious use of AI is to interpret real-time network data to detect problems and analyze network quality. For many years, networks have used alarms to identify problems. One of the issues with an alarm system is that ISPs are constantly hit with minor alarms, and it’s not always easy to pick out the ones that matter. One of the hopes for AI is to look deeper at the performance of network equipment and identify problems long before an alarm is triggered.
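To illustrate catching degradation before a hard alarm fires, here is a minimal sketch of baseline-deviation monitoring: compare each new reading against a rolling baseline and flag sustained deviation. Production systems use far richer models; the readings, window size, and 3-sigma threshold below are assumptions for illustration.

```python
# Sketch of pre-alarm anomaly detection: flag readings that deviate from
# a rolling baseline. Real AI monitoring is far richer; the 3-sigma rule,
# window size, and the optical power readings here are only illustrative.
from statistics import mean, stdev

def detect_anomalies(readings, window=10, sigmas=3.0):
    """Return indices where a reading deviates from the rolling baseline."""
    flagged = []
    for i in range(window, len(readings)):
        base = readings[i - window:i]
        mu, sd = mean(base), stdev(base)
        if sd > 0 and abs(readings[i] - mu) > sigmas * sd:
            flagged.append(i)
    return flagged

# Hypothetical optical power readings (dBm): stable, then degrading
signal = [-7.0, -7.1, -6.9, -7.0, -7.1, -7.0, -6.9, -7.0, -7.1, -7.0,
          -7.0, -8.5, -9.0]
print(detect_anomalies(signal))
```

The last two readings get flagged even though they might still sit above a fixed alarm threshold, which is the point the paragraph above makes.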

ISPs are also starting to use AI for load balancing. It’s easy to think of broadband usage on a network as a steady state, but the reality is that usage spikes and dives erratically from second to second. AI can be used to examine usage on all segments of a network. For example, there are numerous paths from the network core in a fiber or cable network, and AI can examine all of them in real-time, as well as understand how usage spikes from neighborhoods can overwhelm other parts of the network.
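A toy version of the load-balancing idea above: watch the utilization on each path out of the network core and shift traffic off any path that crosses a threshold toward the coolest path. The path names, utilization figures, and 80% threshold are invented for illustration; real traffic engineering is far more involved.

```python
# Toy sketch of AI-assisted load balancing across paths out of the core.
# Path names, utilizations, and the 80% threshold are assumptions.

def rebalance(paths: dict, threshold: float = 0.8) -> dict:
    """paths maps path name -> utilization (0.0-1.0). Returns moves made."""
    moves = {}
    hot = {p: u for p, u in paths.items() if u > threshold}
    for path, util in sorted(hot.items(), key=lambda x: -x[1]):
        coolest = min(paths, key=paths.get)
        shift = (util - threshold) / 2   # move half the overage
        paths[path] -= shift
        paths[coolest] += shift
        moves[path] = (coolest, round(shift, 3))
    return moves

paths = {"core-east": 0.95, "core-west": 0.40, "core-south": 0.62}
print(rebalance(paths))  # shifts load off core-east onto core-west
```

A real system would make this decision every few seconds across hundreds of links, which is why the paragraph above frames it as an AI problem rather than a manual one.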

The big temptation is to let AI take an active role in fixing problems. That idea makes a lot of network engineers nervous because AI is still nothing more than a series of algorithms created by programmers. It’s incredibly challenging for any programmer to create perfect code, and the fear is that a network could get out of control in a way that humans can’t rein in without shutting the network down. It’s not hard to envision an automated AI repeatedly magnifying and compounding a network problem.

The last use of AI by ISPs is to automate functions done by people. None of the big ISPs are talking about this because doing so sparks a lot of anxiety in the workforce. AI seems to be efficient at processing repetitive data and generating routine reports for management. It’s becoming obvious that other industries, like banking and insurance, have already been able to reduce some staff due to AI efficiencies. It’s likely that ISPs are already quietly reducing some clerical and middle-management staff due to AI. This is the part of AI that makes workers nervous. AI is more likely to replace white-collar workers and middle management than hands-on technicians. But this is going to be done quietly, at least until one of the big ISP CEOs spills the beans on an investor call.

It’s going to be a while until any of these benefits move downhill to smaller companies. AI hardware and software are prohibitively expensive, and smaller ISPs will have to wait until there are generic solutions offered by AI vendors.

Growing Broadband Demand

I recently wrote a sequence of blogs that looked at the increasing demand for broadband. In today’s blog I’m going to look at some concrete examples of situations where broadband demand has grown a lot faster than expected.

The first example is in schools. Ten years ago, there was a scramble to get gigabit broadband access to schools. Using the FCC’s E-rate money, a lot of schools across the country got connected to fiber and were able to buy faster broadband. The original goal was to get a gigabit connection to each school, and I remember a few years ago seeing a report that almost every school in many states met that goal. More recently, the FCC created an updated goal that schools ought to have access to at least 1 Mbps of simultaneous capacity for each student. Connected Nation published a report for 2023 saying that 74% of school districts in the country meet or exceed that goal – an increase of 57.4% since 2020.

My consulting firm interviews a lot of schools every year, and we’re hearing that the 1 Mbps goal is no longer adequate. Just recently, we heard from a school that meets that target but still can’t have all students take mandatory state performance tests on the same day. The school still has to ration broadband to make sure that too many classrooms aren’t working online at the same time. I’ve talked to schools that have established a goal of 3 – 5 Mbps per student to accommodate the way that teachers and students really use broadband.
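The per-student goals translate directly into connection sizes. This trivial arithmetic (with a hypothetical enrollment figure) shows why a school that meets the 1 Mbps goal can still fall well short of the 3 – 5 Mbps target:

```python
# Per-student bandwidth arithmetic. The enrollment number is hypothetical.

def school_capacity_mbps(students: int, mbps_per_student: float) -> float:
    """Aggregate capacity needed if every student is online at once."""
    return students * mbps_per_student

enrollment = 1200
print(school_capacity_mbps(enrollment, 1))  # FCC goal: 1200 Mbps (1.2 Gbps)
print(school_capacity_mbps(enrollment, 5))  # 5 Mbps goal: 6000 Mbps (6 Gbps)
```

A school sized for the 1 Mbps goal has no headroom for a testing day when every classroom is online at once, which is exactly the rationing problem described above.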

Another example of fast-growing demand is ISP backhaul – the broadband connections that tie local networks to the Internet. I work with a lot of small ISPs. I can remember helping folks find backbone connections a decade ago, when a typical small ISP might have purchased a connection with an overall 10-gigabit capacity but only provisioned a few gigabits on it. Many were amazed at the 10-gigabit capacity when they first ordered it since it felt so oversized. They assumed that the connection was going to be good for many years as they added a gigabit or two once in a while.

These ISPs turned out to be wrong, and broadband demand grew to swamp the 10-gigabit connections a lot sooner than expected. It’s not hard to understand why. OpenVault has been reporting on the overall average usage of nationwide customers. According to the OpenVault data, the average broadband consumption for homes and businesses has more than tripled just since the end of 2017. At the end of 2023, the average consumption was 641 gigabytes per customer – a number that ISPs would not have believed a decade ago. However, the size of a backbone connection is not based on overall broadband consumption but on busy hour consumption – the time when an ISP’s network is the busiest. Many small ISPs have told me that busy hour traffic has grown even faster than the average consumption reported by OpenVault.
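The backhaul math can be sketched from the OpenVault figure cited above. The subscriber count and busy-hour multiplier below are assumptions, but the sketch shows how quickly a 10-gigabit connection gets swamped:

```python
# Back-of-the-envelope backhaul sizing from average monthly consumption.
# The 5,000-subscriber count and 2.5x busy-hour ratio are assumptions.

def avg_mbps_per_sub(gb_per_month: float) -> float:
    """Convert monthly gigabytes to an average data rate in Mbps."""
    bits = gb_per_month * 1e9 * 8
    seconds = 30 * 24 * 3600  # roughly one month
    return bits / seconds / 1e6

avg = avg_mbps_per_sub(641)   # ~1.98 Mbps per subscriber, on average
subs = 5000
busy_hour_ratio = 2.5         # busy hour vs. the 24-hour average
needed_gbps = avg * subs * busy_hour_ratio / 1000
print(f"{avg:.2f} Mbps avg/sub -> ~{needed_gbps:.1f} Gbps at the busy hour")
```

Even a modest small ISP with a few thousand subscribers at today’s consumption levels needs well over 10 Gbps at the busy hour under these assumptions.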

The final example of broadband demand inflation is the broadband speeds being subscribed to by homes and businesses. OpenVault also reports on the speeds that subscribers are buying nationwide, and reported the following statistics for year-end 2019 through 2023.

This chart shows a rapid migration of households to faster broadband connections. Some of the increase came when cable companies unilaterally increased customer speeds. Since 2019, many cable companies have increased 100 Mbps subscriptions to 200 or 300 Mbps. The chart also shows a big migration of customers to gigabit broadband. Nobody in the country predicted in 2019 that in four short years there would be an 11-fold increase in households subscribed to gigabit speeds. Most of these gigabit customers are paying a premium price to get the faster speeds.

There is proof of increasing broadband demand almost everywhere I look. I often talk to businesses that have upgraded to faster speeds only to find out that within a few years they need even more speed. I hear from farmers, photographers, newspapers, and others who send and receive gigantic data files that they are having a problem buying a broadband product that meets their needs.

A Troubling Decision on Rates

The 2nd U.S. Circuit Court of Appeals in Manhattan ruled recently that federal telecommunications law does not stop states from regulating broadband rates. This was in relation to a 2018 law passed by the State of New York that required ISPs to offer low-income rate plans for as low as $15 per month.

ISPs challenged the new law, and a U.S. District Court issued an injunction against it. The recent ruling overturned that injunction and puts the law back into effect. The law allows ISPs with fewer than 20,000 customers to appeal the implementation of the lower rates, but there is no guarantee that ISPs will be relieved from the law.

I’ve written many times about the negative impact of forcing low rates onto ISPs. ISPs with a lot of low-income customers could quickly find their revenue streams decimated. Any legislator or regulator who makes this kind of rule must think that ISPs sit atop gigantic margins – but many do not. An ISP with a lot of low-income customers will almost certainly have to increase other rates to offset the forced low rates, a move that would likely put it at a competitive disadvantage.

This court ruling comes at an interesting time. The FCC just passed new rules that put Title II regulation and net neutrality back into play. One of the interesting provisions of the new rules is that the FCC purposefully decided to forbear from regulating broadband rates, meaning the FCC didn’t invoke the portions of Title II that give the agency the ability to regulate rates.

The Court’s ruling was made under the assumption that ISPs are regulated under Title I rules – which are the rules that have been in place since the Ajit Pai FCC killed net neutrality. But suddenly, we are back in a Title II regulatory environment. The Court ruled that the FCC has no power to preempt State regulation under Title I rules, but that the FCC would have that right under Title II regulation. This means the Court believes the FCC could now preempt the State law since the agency just reinstated Title II regulations.

The court ruling creates several dilemmas for ISPs. The easiest path for ISPs to fight the reinstated New York law is to embrace Title II regulation and ask the FCC to preempt New York. That’s not something that big ISPs want to do – they have spent years and a lot of political capital vilifying Title II regulation. Everybody is expecting big ISPs to quickly appeal the recent FCC order that reinstates Title II regulation. If ISPs are successful in getting a Court to put Title II rules on hold, then the New York low-rate regulations will go into effect without recourse from the FCC.

If the big nationwide ISPs decide to try to kill Title II regulation, they will be throwing New York ISPs under the bus. But that’s not the end of the story. If the New York law goes into effect, it seems likely that other states will pass similar legislation. Many states are unhappy to see the ACP low-income subsidy die. But very few States are interested in using general funds to fund a new low-income subsidy program, so it’s going to be tempting to force ISPs to cover the discounts. If ISPs decide to fight against the FCC’s Title II rules, they might find themselves fighting against having to cut rates in dozens of states.

The Court ruling also creates a dilemma at the FCC. Even if the FCC has the right to tell New York that it can’t regulate rates – will it do so? The FCC recently made it clear that it did not want to try to absorb the dying ACP plan into the Universal Service Fund. But that doesn’t mean that the FCC will willingly play the bad guy and tell States they can’t tackle some state version of ACP relief.

It is nearly impossible to predict how the FCC will react. The FCC will certainly be happy to see States tackle the low-income problem since that takes the FCC off the hot seat. But the FCC would be happier with state plans that mimic ACP, where a State would fund the subsidy. The FCC will not like the precedent of states telling ISPs they must cut rates. Which will the FCC dislike more – telling states they can’t cut rates or letting states exercise rate regulation?

Every ISP ought to be concerned about this ruling. However, there is no way to guess how the big ISPs and the FCC will react to the Court order. It’s unusual to encounter a regulatory ruling that is as challenging as this one for both ISPs and the FCC.

Unpacking the Net Neutrality Order

Today’s blog provides a short summary of the FCC’s new Order that reinstates Title II authority and net neutrality. It’s a monster order of 434 pages and 2,921 comments.

Following are my key takeaways from the Order:

  • A large part of the Order reinstates nearly the same Title II rules that were vacated when the Ajit Pai FCC killed Title II authority.
  • For those of you who need a new acronym, the Order refers to broadband as BIAS (Broadband Internet Access Service). ISPs are now BIAS providers, an unfortunate acronym.
  • The Order reinstated Title II authority over BIAS services – meaning broadband is considered to be a telecommunications service, not an information service.
  • The FCC granted itself new and expanded authority to defend national security. It notes that it has taken actions related to national security in recent years that would have been stronger if based on the new authority described in this Order.
  • The FCC also described its role in addressing cybersecurity issues.
  • The FCC says Title II authority gives it more tools to deal with network resiliency and reliability related to natural disasters or malicious interference. The Order gives it authority to make ISPs participate in the Disaster Information Reporting System (DIRS).
  • The Order reinstates privacy and data security rules under Section 222 rules that have only been applied to voice services.
  • The FCC thinks the Order gives it the authority to develop rules that apply to ISPs that serve multi-dwelling units – a topic being explored in a different FCC proceeding.
  • The FCC says the Order extends opportunities to ISPs that provide only broadband and no other services under FCC jurisdiction. That should help such ISPs with issues like attaching to poles. It also allows such ISPs to participate in Universal Service Fund support plans.
  • The FCC thinks the order gives it the authority to require ISPs to provide better access for people with disabilities.
  • The Order clarifies that specialized services at the network’s edge are not considered to be broadband, with examples like the networks built inside a large enterprise. Broadband edge services provided by premises operators, like broadband at coffee shops, universities, bookstores, and libraries, are also not regulated. Content delivery services, VPNs, web hosting, and data storage services are also not regulated.
  • The FCC says that services like peering, traffic exchange, and interconnection fall under Title II authority.
  • The FCC took on expanded authority to preempt States that want to regulate broadband. For now, the FCC is not preempting the California net neutrality rules.
  • The FCC specifically decided not to expand contributions to the Universal Service Fund to include broadband. There has been a lot of lobbying to have the FCC pick up the expiring ACP program, and this shut that door.
  • The FCC went out of its way to say that it is not going to engage in rate regulation. This is the big bogeyman that giant ISPs have said would come with regulation – and for now, the FCC is not invoking any authority over rates, but admits that it has the authority to do so.
  • Of course, the Order is adopting all of the rules referred to as net neutrality. These are the rules that prohibit ISPs from blocking or throttling traffic or engaging in paid prioritization. This is not the main thrust of the Order and didn’t get discussed until page 264.
  • The FCC is reinstating the transparency rules for ISPs that were first put into place in 2015. Under these rules, ISPs must publicly disclose accurate information to customers involving network management practices, network performance, prices, and other information that customers rely on to buy broadband. The transparency requirements go significantly beyond what is required for the broadband labels. For now, these rules will only apply to ISPs with more than 100,000 customers.
  • The Order reinstates both the informal and formal complaint process where consumers can lodge complaints against ISP practices, and ISPs can ask the FCC to intervene in carrier disputes.
  • The Order reminds ISPs that it has the ability to enforce broadband regulations using fines or other tools at its disposal.

Copyrights and ISPs

There is a long-running legal case that could have dire consequences for broadband households. The case started in 2018 when a group of major record labels sued Cox Communications over its policies related to copyrights. The labels accused Cox of refusing to disconnect customers who repeatedly broke copyright rules by downloading music without paying for it.

In 2019, a court in Virginia found Cox liable for both contributory and vicarious copyright infringement and awarded the music labels an astounding $1 billion in damages. Cox appealed, and the Fourth Circuit U.S. Court of Appeals reversed the finding of vicarious infringement and vacated the $1 billion in damages. There will be a new trial to reassess the size of the damage award.

The troubling part of the legal ruling is that, even after appeal, Cox still stands liable for contributory infringement over actions taken by its customers. That’s a ruling that should concern every ISP – and every Internet user.

The record labels insist that Cox should have permanently disconnected any customer who engaged in repeated copyright infringement. This ruling turns ISPs into Internet police who must monitor and punish customers who engage in copyright infringement. That doesn’t just mean people who download copies of music, but also movies, games, and books. It means that in order to avoid paying big damages, ISPs might cut off customers for watching a pirated sporting event.

This is an incredibly uncomfortable role for ISPs. ISPs are not going to monitor everything their customers do, but will instead react to complaints made by copyright holders. Complaints are rarely made directly by those holding the copyrights; there is an entire industry of companies that make a living by issuing take-down requests for infringement of copyrighted materials. Social media companies are inundated with these take-down requests every day to remove posts that link to copyrighted music, movies, and other materials. The music companies expect ISPs to cut off subscribers after only a few copyright violations. ISPs are in the business of selling broadband connections, and the last thing they want to do is disconnect paying customers.

This could be devastating for broadband customers. Most homes in the U.S. don’t feel that they have broadband choice and only have access to one fast ISP. If they lose that connection, they could find themselves cut off from functional broadband.

It’s easy to believe that customers who get cut off for such violations deserve it. But the process is completely one-sided, and a broadband customer who is unjustly accused of bad behavior has no appeal or recourse. Any home with teenagers will have to worry about teens downloading copies of games, movies, or music. People could hit a link on social media that downloads copyrighted material without even realizing they did something wrong. Such downloads could be made on a cellphone using a home’s WiFi – and the bad behavior doesn’t even have to come from a family member.

This is a case where the punishment does not fit the crime. Rather than directly pursuing people who download pirated copyrighted material through a legal process, copyright holders want ISPs to act as judge, jury, and executioner and unilaterally punish customers by taking away their Internet access.

There are numerous surveys since the pandemic that show that a large majority of people now consider a broadband connection to be essential. All of the surveys my consulting business has done in the last year show that half or more of homes now have somebody working from home using broadband at least part-time every week, and we routinely find 10% to 15% of homes with somebody working at home full time. We now use broadband for a wide variety of essential activities such as shopping, banking, hunting for a job, and connecting with a doctor.

While the courts vacated the billion-dollar penalty against Cox, ISPs are all going to take notice if the courts still impose a sizable penalty in the rehearing on damages. Losing an essential broadband connection because teens, roommates, or visitors violated copyright laws seems like an extreme penalty. If ISPs start cutting customers off for violating copyrights, I have to imagine that people are going to be a lot more cautious about giving visitors or even family members the WiFi password.

Like many other problems in the industry, the only real fix for this is to have Congress update or replace the Digital Millennium Copyright Act (DMCA), which was adopted in the 1990s when we were all still using dial-up access.

The State of the Internet – 2024

It’s been a while since I took a look at the worldwide Internet. The statistics cited below come from Datareportal.

The world population in January 2024 was 8.08 billion, up 74 million from a year earlier, a growth rate of 0.9%.

There were 5.61 billion unique mobile subscribers in January, up 138 million (2.5%) over a year earlier.

5.35 billion people used the Internet at the end of 2023, up 97 million (1.8%) from a year earlier. This means almost two-thirds of people on the planet are connecting to the web. Some interesting statistics about worldwide Internet connectivity:

  • 63.5% of females are connecting to the Internet, and 68.8% of males.
  • 61.8% of worldwide Internet access comes from laptops and desktops.
  • 78.8% of urban residents worldwide use the Internet versus 48.9% of rural people.

The least connected nations, measured by the share of the population not using the Internet: North Korea at 99.9% unconnected. Between 80% and 90% unconnected – Central African Republic, Burundi, South Sudan, Niger, Yemen, Afghanistan, Ethiopia, Burkina Faso, and Madagascar.

The countries that still have the most unconnected populations:

  • India 684 million
  • China 336 million
  • Pakistan 132 million
  • Nigeria 103 million
  • Bangladesh 96 million
  • Indonesia 76 million
  • Tanzania 47 million
  • Uganda 36 million

The average time spent online worldwide is 6 hours 40 minutes per day, up 3 minutes from 2023. That means the world spends a combined 780 trillion minutes using the Internet in a year.
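The aggregate figure above checks out with simple arithmetic:

```python
# Sanity check of the worldwide total: 5.35 billion users online for
# 6 hours 40 minutes per day over a full year.
users = 5.35e9
minutes_per_day = 6 * 60 + 40                  # 400 minutes
total_minutes = users * minutes_per_day * 365
print(f"{total_minutes / 1e12:.0f} trillion minutes per year")  # ~781 trillion
```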

The countries with the most average daily usage:

  • 9+ hours – South Africa, Brazil
  • 8 to 9 hours – Philippines, Colombia, Argentina, Chile, Russia, Malaysia, U.A.E.
  • U.S. is at 7 hours 3 minutes – 20th in the world.

Younger people worldwide spend more time online. The age group 16-24 spends 7 hours 15 minutes online daily, while those 55-64 spend 5 hours 15 minutes.

5.04 billion people use social media, up 266 million (5.6%) from a year earlier. There are 8.4 new social media users connected per second.

People are also spending more time on social media. The average TikTok viewer spends 34 hours per month on the site. Other sites with the most usage include YouTube (28 hours), Facebook (19.8 hours), WhatsApp Messenger (17 hours), Instagram (15.8 hours), Line (8.2 hours), X (4.6 hours), Telegram (3.8 hours), Snapchat (3.5 hours), FB Messenger (3.3 hours), Pinterest (1.7 hours), and LinkedIn (0.9 hours).

Countries with the biggest percentage of social media users.

  • Over 90% – U.A.E., Saudi Arabia, South Korea
  • 85% – 90% – Hong Kong, Singapore, Netherlands
  • 80% – 85% – Spain, Malaysia, U.K., Canada, Norway, Austria, Germany, Sweden
  • U.S. is at 70.1% – 36th place.

The most popular uses of the Internet (percentage that use each function)

  • 1 – Chat and Messaging 94.7%
  • 2 – Social Media 94.3%
  • 3 – Search 80.7%
  • 4 – Shopping 74.3%
  • 5 – Location services / Maps 54.4%
  • 6 – Email 49.5%
  • 7 – Music 48.1%
  • 8 – Weather 42.2%
  • 9 – Entertainment 40.6%
  • 10 – News 40.3%

Average time worldwide spent with various Internet/Media per day:

  • Using the Internet 6 hours 40 minutes
  • Watching Video (Online and TV) 3 hours 6 minutes
  • Using Social Media 2 hours 23 minutes
  • Reading Press 1 hour 41 minutes
  • Streaming Music 1 hour 25 minutes
  • Using a Game Console 1 hour 2 minutes
  • Listening to Broadcast Radio 50 minutes
  • Listening to Podcasts 49 minutes

Online ads now represent 70% of all advertising dollars. Worldwide, $1.03 trillion was spent on online ads in 2023, up $70 billion over the previous year. $719.2 billion of that spending went to digital search sites and social media.

How the Pandemic Changed Broadband

The Washington Post recently published an article with a series of graphs showing the impact of the pandemic on economic indicators ranging from unemployment and wages to air travel, grocery prices, home prices, and consumer sentiment.

The article got me thinking about the impact of the pandemic on the broadband industry – and there are several important changes that came out of our collective pandemic experience.

Upload Speeds. Probably the biggest change for the industry was that many millions of people suddenly cared about upload speeds as they tried to work from home and students tried to attend class from home. There have always been people who complained about being unable to join a Zoom call, but before the pandemic, ISPs largely ignored them.

The pandemic turned slow upload speeds into a crisis. It turns out that upload speeds weren’t just a problem for slow technologies like DSL and hotspots. Cable companies suddenly had a lot of irate customers who were furious that they couldn’t maintain upload connections from home. Cable companies had put a lot of effort over the previous decade into staying ahead of download demand, regularly making unilateral upgrades before customers began complaining about download speeds. Every few years, customers would wake up to suddenly faster speeds, and surveys showed that most cable broadband customers were happy with their download speeds.

But the pandemic suddenly made cable technology look inadequate. The collective experience of customers during the pandemic convinced the public that fiber is a better technology and that their cable company was behind the times. This prompted the cable companies to scramble for a faster upload solution, and we’re just now seeing them implement faster upload speeds four years after the start of the pandemic. Only time will tell if the current upload upgrades will be good enough to turn around the public sentiment that now favors fiber over coax.

Working at Home. The pandemic sent a huge number of people home to work, and many of them have never gone back to the office. My consulting firm does surveys, and before the pandemic we rarely saw more than 10% of homes with somebody working from home even part time. Today, we routinely find communities where 15% or more of homes have somebody working at home full time, and 50% of homes have somebody working from home part time.

The main impact for ISPs of having customers working from home is that it created a lot of customers who are intolerant of broadband outages. People who work from home typically lose the ability to work during an outage, and ISPs get instant feedback about outages through complaints and negative online reviews. Our surveys show that intolerance of outages has climbed significantly since before the pandemic. Many customers now believe broadband should always work.

Outrage over Lack of Rural Broadband. I’ve been working with rural communities that have been yelling for more than a decade about the problems caused by poor broadband. The pandemic brought this issue to national attention when employers and schools in cities and county seats couldn’t send people home for school or work. There was so much press about the issue that I think this was the first time that a lot of urban and suburban people realized that rural folks don’t have the same broadband.

I firmly believe that the outcry about the impact of the pandemic is what got the BEAD grants put into the IIJA legislation at such a high level of funding. Before the pandemic, the federal government and states would throw a billion dollars or so each year at fixing rural broadband – I used to call this the hundred-year plan to solve rural broadband. It took the pandemic to get bigger dollars thrown at the rural broadband gap. I don’t know if anybody has added up all of the funding, but between state, federal, and local grants, we must be spending nearly $100 billion for new rural broadband networks.

Subsidizing Rural Broadband Networks

We are preparing to award over $44 billion to construct rural broadband networks. Almost by definition, these networks will be built in rural areas where it’s hard to justify a business plan where revenues generated from the grant areas are sufficient to fund the ongoing operation and eventual upgrades to any broadband networks.

The FCC has addressed this issue in the past, and numerous FCC programs have provided ongoing subsidies for rural broadband networks. The FCC has been very careful over the past decades to create separate subsidies for small telephone companies and cooperatives versus the largest telephone companies. The reason for the distinction had to do with economies of scale. A higher level of subsidy was provided to smaller telcos since it was reasoned that small rural companies have a hard time staying afloat without a subsidy.

Conversely, the historic reasoning of regulators was that large telcos didn’t need as much subsidy, or even any subsidy since the big companies also operated in county seats and large cities. Historic regulation assumed that the profits generated in urban and suburban areas could be used to subsidize rural areas.

The original subsidy to small telcos came from the Universal Service Fund (USF). Not every small telco received a subsidy, and the amount of any FCC subsidy was calculated according to the cost structure of each small telco. Small companies would calculate their costs annually, and the subsidy favored the rural companies with the highest costs.

The FCC adopted a major change to the rural subsidy program in 2014 with the USF/ICC Transformation Order. This made the compensation for small telcos more complicated and created different subsidies for different kinds of costs – but the subsidies still benefited the highest-cost small telcos. The subsidy program for small telcos eventually morphed to include the A-CAM program.

Before the USF/ICC Order, only a small portion of big telephone company areas were eligible for any USF subsidy. The ICC Order was a huge win for big telcos, and subsequent to the Order, the FCC created the CAF II subsidy for the most rural locations served by the big telcos. Suddenly, many billions of dollars of subsidy flowed to big telcos to upgrade rural DSL speeds to at least 10/1 Mbps.

In recent years, the FCC opened subsidy programs to a wider range of carriers than just incumbent telephone companies. Both the CAF II auction and the Rural Digital Opportunity Fund (RDOF) were conducted as reverse auctions that were open to any ISP. Some of these funds went to telcos, but funds also went to cable companies, fixed wireless ISPs, and new start-up fiber overbuilders.

This history raises an interesting question: what happens after BEAD? The BEAD grants are not a subsidy program. As a grant program, practically every dollar of BEAD funds must be used to build broadband infrastructure – with only some minor reimbursements allowed to cover the cost of complying with the grant paperwork. The BEAD money does not cover any operating expenses for the rural networks that will be built.

In a post-BEAD world, there will be a reshuffled mix of rural broadband networks – properties still operated by small telcos, properties that are still receiving CAF II or RDOF subsidies, and areas built with BEAD or ARPA grants that will not be receiving any subsidies. Some of the BEAD properties will be operated by giant telcos and cable companies, while others will be operated by a wide range of smaller ISPs. The FCC will have created a real mess in rural America, with adjoining areas receiving drastically different levels of federal support – even when the local cost characteristics are identical.

I find it inevitable that companies that win BEAD will start lobbying for operating subsidies within a few years of networks being constructed. The FCC will be faced with the challenge of coming up with a sustainable subsidy program for all rural broadband networks. I think the FCC has several possible paths to take in the post-BEAD world:

The FCC could continue with the existing subsidy programs with no acknowledgement of the wide disparity between areas that do and don't receive subsidies. The FCC could randomly decide on new subsidy programs to support subsets of companies – perhaps one subsidy program for BEAD winners and another for RDOF properties.

Or the FCC could start all over and design a subsidy program for the post-BEAD world. The best subsidy program would be cost-based, like the original USF. The original cost-based USF looked at the company-wide costs of each ISP, not at the costs to operate in rural areas. Under a cost-based system, small rural companies would likely get the most subsidy per subscriber while large ISPs that operate urban networks would likely get nothing and would be expected to support rural properties with urban profits.

Another option would be that the same amount of subsidy goes to support every rural subscriber, regardless of who owns the ISP business.
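The difference between the two approaches can be sketched with a toy calculation. All of the company names, subscriber counts, and cost figures below are invented for illustration – this is a sketch of the two models described above, not real FCC math:

```python
# Hypothetical comparison of a cost-based subsidy (original-USF style) versus
# a flat per-subscriber subsidy. All numbers are invented for illustration.

rural_isps = {
    # name: (rural subscribers, monthly cost per sub, monthly revenue per sub, has urban profits?)
    "Small Rural Telco": (2_000, 110.0, 65.0, False),
    "BEAD Overbuilder":  (5_000,  95.0, 65.0, False),
    "Giant Urban ISP":  (50_000,  80.0, 65.0, True),
}

def cost_based_subsidy(subs, cost, revenue, urban_profits=False):
    """Cover the shortfall between cost and revenue per subscriber,
    but expect a company with urban profits to self-fund its rural areas."""
    if urban_profits:
        return 0.0
    return max(cost - revenue, 0.0) * subs

def flat_subsidy(subs, per_subscriber=30.0):
    """Same subsidy per rural subscriber, regardless of who owns the ISP."""
    return per_subscriber * subs

for name, (subs, cost, rev, urban) in rural_isps.items():
    print(f"{name}: cost-based ${cost_based_subsidy(subs, cost, rev, urban):,.0f}/mo, "
          f"flat ${flat_subsidy(subs):,.0f}/mo")
```

Under the cost-based model the highest-cost small companies collect the most per subscriber and the giant urban ISP collects nothing; under the flat model every rural subscriber attracts the same dollars no matter who serves them.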

There has been a tickle in the back of my brain for the last year wondering why companies like AT&T, Charter, and Comcast seem to be willing to pursue grants for rural areas where it will be a challenge for revenues to fully cover costs. The big telcos have been working feverishly to ditch copper networks, and it’s hard to understand why they are now willing to go back into rural areas that have low density and long drive times.

But it recently struck me – these big companies are betting on the FCC creating a future subsidy program for areas being built with the current flood of ARPA and BEAD grants. I can't see any other way to justify some of the grants I've seen the big companies accept. My bet is that we'll barely make it through the BEAD grant awards before the big companies start lobbying for new subsidy programs that benefit them more than other rural ISPs.

Technology Shorts April 2024

Scientists continue finding ways to make computers faster and better. Today’s blog talks about three interesting developments in computing technology.

Universal Computer Memory. The holy grail of computer memory has been a universal memory that can replace the current need for both short-term and long-term memory in the computing process. An article published in Nature Communications describes a new material that looks like it could enable universal computer memory.

Computers currently use RAM for short-term memory. RAM chips are superfast but need a lot of physical space and use a lot of power. A big downside to RAM is that everything is lost if a computer loses power. Long-term memory is achieved using flash memory, which is much slower than RAM but can retain data without power. Universal computer memory would capture the best of both worlds by being fast, energy-efficient, and retaining data without power.

Scientists at Stanford and other universities are using a new material called GST467 that contains germanium, antimony, and tellurium. Scientists are configuring GST467 in a stacked-layer structure known as a superlattice. They believe this will create chips that are faster, less expensive to manufacture, and that use less power. The team tested hundreds of different chip sizes and configurations using the new material and found that a GST467 memory device achieved fast speeds while consuming very little power. They also believe the material can retain data for more than ten years at temperatures as high as 248 degrees Fahrenheit. These are all huge performance improvements over current chips.

First Graphene Semiconductor. Graphene is made from a single layer of carbon atoms bound in a tight hexagonal lattice. It seems like a superior material for electronics since it's a better conductor than silicon. Scientists have long known that graphene has an unusual property: electrons passing through it move in a wave-like pattern that is well suited to quantum computing.

Researchers had never been able to overcome the issue of creating a band gap in graphene. A band gap is the property that lets a material act as a semiconductor rather than a simple conductor – it's what enables a component like a transistor to turn on and off.

As reported in Nature, scientists at the Georgia Institute of Technology, working with colleagues in China, have created the first working graphene-based semiconductor. The chip is made from epitaxial graphene, a specific crystalline form of graphene bonded to silicon carbide. They've found that transistors in this structure can operate at terahertz frequencies, which is ten times faster than today's silicon-based chips. The best news is that it looks like this new structure could be integrated into current chip manufacturing processes.

Protonic Artificial Synapse. Engineers at MIT have developed an artificial synapse that mimics the way the brain works, but that can move data a million times faster than the human brain. The human brain is by far the most powerful data processor due to the unique structure of neurons and synapses, and scientists and engineers have been trying for years to duplicate the brain using electronic neural networks.

The MIT team has mimicked neural networks by creating a chip that works more like the brain. It uses an analog system that shuttles data using protons instead of electrons. The chip uses a solid electrolyte made from phosphosilicate glass (PSG) that allows the creation of a programmable resistor that works at room temperature.

When a strong electric field of up to 10 volts is applied to the device, the protons move very quickly, which is what allows the chip to be up to a million times faster than a brain. They've found that the chip seems to have a long life and doesn't break down from the increased power. The big challenge is to find a way to mass-produce the chips and then arrange them into the most effective array.

The Battle for Network Monitoring

An interesting battle is underway to capture the market for monitoring devices. The latest entry into the market is 5G RedCap. This is a technology that is currently under development in chipsets and ought to hit the market in 2025 and 2026.

RedCap is the latest attempt by cellular carriers to monetize 5G. RedCap was defined in 3GPP Release 17 of the 5G specifications. The technology allows for 5G devices that are less complex, less costly, and more power-efficient than conventional 5G devices like smartphones. RedCap will compete to connect monitoring devices like sensors that send small packets of information continuously and require a long battery life. This includes devices like industrial wireless sensors, health wearables, and surveillance devices. Traditional 5G is not good for such devices because 5G chips add too much cost and use too much power.

5G RedCap devices will use fewer antennas and will support less bandwidth than a typical 5G connection. Fewer antennas, lower bandwidths, and different modes of operation will help to reduce power consumption. RedCap devices can transmit data without having to connect to a network – the RedCap device can transmit its bits and hope a network is receiving it.

Cellular carriers will be working on ways to monetize the new capability – perhaps by selling monthly subscriptions for all of the devices at a given site.

This contrasts with the other technologies used to monitor devices. For devices inside or near buildings, the monitoring technology of choice is free wireless connectivity using WiFi or Bluetooth. Devices can be monitored with these technologies without paying an additional fee. However, WiFi devices can still require more power than is envisioned for RedCap. Most WiFi monitoring devices have to be recharged periodically, which is not always practical for a small device like a sensor that alerts the network if somebody walks through a hallway.
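The recharging problem comes down to simple arithmetic: battery life is battery capacity divided by average current draw. A toy sketch, using invented current figures (real draws vary widely by chipset and duty cycle), shows why a lower-power radio matters so much for a small sensor:

```python
# Back-of-the-envelope battery life for a small sensor.
# The current-draw figures below are invented for illustration only.

def battery_life_days(capacity_mah, avg_current_ma):
    """Battery life = capacity / average draw, converted from hours to days."""
    return capacity_mah / avg_current_ma / 24

coin_cell_mah = 230       # roughly a CR2032 coin cell

# Assumed average draws for a sensor that mostly sleeps and sends small packets:
wifi_sensor_ma = 1.0      # WiFi radios are relatively power-hungry even when duty-cycled
low_power_ma = 0.05       # the class of draw that RedCap-style devices are aiming for

print(f"WiFi-class sensor:      {battery_life_days(coin_cell_mah, wifi_sensor_ma):.0f} days")
print(f"Low-power-class sensor: {battery_life_days(coin_cell_mah, low_power_ma):.0f} days")
```

Under these assumed numbers, the WiFi-class sensor needs a recharge within about a week and a half, while the low-power device runs for several months on the same coin cell – which is the whole pitch behind RedCap-style radios.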

WiFi is not a good solution for monitoring outdoor devices that are not located very close to a WiFi network. WiFi also isn’t a good solution for something like a portable health wearable unless the user always carries a cellphone – an impractical requirement.

The other interesting player in the market is Amazon. The company launched its Sidewalk network in 2021. Amazon has created a local network that is established between Amazon devices in the home and neighborhood. The network uses a combination of Bluetooth and 900 MHz LoRa signals. This network can communicate with Amazon from devices inside a home, and the LoRa spectrum can pick up devices outside and in neighboring homes. Amazon says that it already covers over 90% of homes and wants to move that to over 95%.

A fourth technology in use today uses satellites to monitor remote devices. However, the electronics for such devices are neither low-cost nor low-power.

In looking at the various technologies, it's clear that each will find a niche. RedCap seems best aimed at the mobility market. The technology will make it easier to sell wearable technology and anything else that is not stationary. RedCap might also fit industrial situations where an operator is attracted to large numbers of low-power, low-cost sensors. But RedCap will come at a price – when you buy a wearable device, be prepared for a monthly 5G fee.