Right now the network is more sensitive to latency than to bandwidth - the longer it takes a block to propagate through the network, the higher the block stale rate will be.
However, my understanding is that a node's network bandwidth is actually underutilized - P2P bandwidth is only used when broadcasting a tx or a block, the timing of block broadcasts is random, and the resulting traffic is sporadic.
Consider a full node that runs all shards, sustains 10,000 TPS, and relays every tx to all of its peers. The expected bandwidth required (with 30 connected peers) will be
100 (bytes per tx) * 30 (peers) * 10,000 (TPS) * 8 (bits per byte) = 228.88 Mbps
which I think is acceptable for a machine in a cloud with more than 1 Gbps or even 10 Gbps of bandwidth.
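To sanity-check the arithmetic, here is a minimal back-of-the-envelope script using the same assumptions as above (100 bytes per tx, 30 peers, 10,000 TPS, every tx relayed to every peer, and Mbps taken as 2^20 bits per second):

```python
# Back-of-the-envelope check of the bandwidth estimate above.
# Assumed parameters (from the post), not measured values.
TX_SIZE_BYTES = 100
PEERS = 30
TPS = 10_000

bytes_per_second = TX_SIZE_BYTES * PEERS * TPS   # 30,000,000 B/s
mbps = bytes_per_second * 8 / 2**20              # bits/s, in binary Mbps

print(f"{mbps:.2f} Mbps")  # -> 228.88 Mbps
```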
Besides bandwidth, other factors limiting scalability can be storage and processing power, where our solution is to run a full node as a cluster whose machines each process/store part of the ledger thanks to sharding. Our code already supports this, and our community members can easily reach 100,000 TPS in a test environment (3 full nodes).
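As a rough illustration of the cluster idea (this is a hypothetical sketch, not code from our actual implementation - the function name and round-robin policy are my own), each machine in the cluster takes responsibility for a subset of shards, dividing storage and processing N ways:

```python
# Hypothetical sketch: split shard work across the machines of a cluster.
# The assignment policy (round-robin) is illustrative only.
def assign_shards(num_shards: int, num_machines: int) -> dict:
    """Map each machine index to the list of shards it handles."""
    assignment = {m: [] for m in range(num_machines)}
    for shard in range(num_shards):
        assignment[shard % num_machines].append(shard)
    return assignment

# A 3-machine cluster covering 8 shards:
print(assign_shards(8, 3))
# -> {0: [0, 3, 6], 1: [1, 4, 7], 2: [2, 5]}
```

Each machine then only needs the storage and CPU for its own shards, which is what lets a cluster of modest machines act as one logical full node.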
In fact, when discussing scalability, Satoshi already envisioned a similar way to address the problem, which I quote here:
Satoshi Nakamoto Sun, 02 Nov 2008 17:56:27 -0800: Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware. A server farm would only need to have one node on the network and the rest of the LAN connects with that one node.
Satoshi Nakamoto July 14, 2010, 09:10:52 PM: I anticipate there will never be more than 100K nodes, probably less. It will reach an equilibrium where it’s not worth it for more nodes to join in. The rest will be lightweight clients, which could be millions.
At equilibrium size, many nodes will be server farms with one or two network nodes that feed the rest of the farm over a LAN.
Satoshi Nakamoto July 29, 2010, 02:00:38 AM: The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user runs their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don’t generate.
where “server farms” == “cluster” in our design.
Note that even though the cost of running a full node can be higher, the node-running cost is still significantly smaller than the solo-mining cost, as the following calculation shows: