Tag Archives: graph theory

When random people give money to random other people

A post on Decision Science about a problem of Uri Wilensky’s has been making the rounds:

Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?

People often expect the distribution to be close to uniform.  But this isn’t right; the simulations in the post show clearly that inequality of wealth rapidly appears and then persists (though each individual person bobs up and down from rich to poor.)  What’s going on?  Why would this utterly fair and random process generate winners and losers?
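Here’s a minimal simulation sketch in Python, if you want to watch the inequality appear yourself (numpy assumed; the function and parameter names are mine, not anything from the Decision Science post):

```python
import numpy as np

def simulate(n_people=100, start=100, steps=50000, seed=0):
    """Each tick, every person with money gives $1 to a uniformly chosen other person."""
    rng = np.random.default_rng(seed)
    money = np.full(n_people, start)
    for _ in range(steps):
        givers = np.where(money > 0)[0]
        # a random recipient other than the giver, chosen by a nonzero circular offset
        recipients = (givers + rng.integers(1, n_people, size=len(givers))) % n_people
        money[givers] -= 1
        np.add.at(money, recipients, 1)  # handles several gifts landing on one person
    return np.sort(money)

print(simulate())  # far from uniform: a few rich players, many near zero
```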

Here’s one way to think about it.  The possible states of the system are the tuples of nonnegative integers (m_1, .. m_100) summing to 10,000; if you like, the lattice points inside a simplex.  (From now on, let’s write N for 100 because who cares if it’s 100?)

The process is a random walk on a graph G, whose vertices are these states and where two vertices are connected if you can get from one to the other by taking a dollar from one person and giving it to another.  We are asking:  when you run the random walk for a long time, where are you on this graph?  Well, we know what the stationary distribution for random walk on an undirected graph is; it gives each vertex a probability proportional to its degree.  On a regular graph, you get uniform distribution.

Our state graph G isn’t regular, but it almost is; a constant proportion of the nodes, about 1/e, have the maximal degree.  To see this, note that the number of states is

\binom{N^2 + N - 1}{N - 1}

and, of these, the ones with maximal degree are exactly those in which nobody’s out of money; if you first give each person a dollar, the number of ways to distribute the remaining N^2 - N dollars is

\binom{N^2 - 1}{N - 1}

and so the proportion of states in which nobody’s out of money is about

\frac{(N^2 - 1)^N}{(N^2 + N - 1)^N} \sim (1-1/N)^N \sim 1/e.

So, apart from those states where somebody’s broke, in the long run every possible state is equally likely;  we are just as likely to see $9,901 in one person’s hands and everybody else with $1 as we are to see exact equidistribution again.

What is a random lattice point in this simplex like?  Good question!  An argument just like the one above shows that the probability nobody goes below $c is on the order of e^{-c}, at least when c is small relative to N; in other words, it’s highly likely that somebody’s very nearly out of money.

If X is the maximal amount of money held by any player, what’s the distribution of X?  I didn’t immediately see how to figure this out.  You might consider the continuous version, where you pick a point at random from the real simplex

\{ (x_1, \ldots, x_N) \in \mathbf{R}_{\geq 0}^N :   \sum_i x_i = N^2 \}.

Equivalently:  break a stick of length N^2 at N-1 randomly chosen points; what is the length of the longest piece?  This is a well-studied problem; the mean size of the longest piece is about N log N.  So I guess I think maybe that’s the expected value of the net worth of the richest player?
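Here’s a quick Monte Carlo sketch of the stick-breaking version (mine, numpy assumed), just to eyeball the N log N heuristic:

```python
import numpy as np

def mean_longest_piece(n=100, trials=2000, seed=0):
    """Break a stick of length n^2 at n-1 uniform points; average the longest piece."""
    rng = np.random.default_rng(seed)
    total = n ** 2
    cuts = np.sort(rng.uniform(0, total, size=(trials, n - 1)), axis=1)
    pieces = np.diff(cuts, axis=1, prepend=0, append=total)
    return pieces.max(axis=1).mean()

# The exact mean is n * H_n, about n(log n + 0.577), roughly 518 for n = 100.
print(mean_longest_piece())
```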

But it’s not obvious to me whether you can safely approximate the finite problem by its continuous limit (which corresponds to the case where we keep the number of players at N but reduce the step size so that each player can give each other a cent, or a picocent, or whatever.)

What happens if you give each of the N players just one dollar?  Now the uniformity really breaks down, because it’s incredibly unlikely that nobody’s broke.  The probability distribution on the set of (m_1, .. m_N) summing to N assigns each vector a probability proportional to the size of its support (i.e. the number of m_i that are nonzero.)  That must be a well-known distribution, right?  What does the corresponding distribution on partitions of N look like?

Update:  Kenny Easwaran points out that this is basically the same computation physicists do when they compute the Boltzmann distribution, which was new to me.



Metric chromatic numbers and Lovasz numbers

In the first post of this series I asked whether there was a way to see the Lovasz number of a graph as a chromatic number.  Yes!  I learned it from these notes, written by Big L himself.

Let M be a metric space, and let’s assume for simplicity that M has a transitive group of isometries.  Now write r_M(n) for the radius of the smallest ball containing n points whose pairwise distances are all at least 1.  (So this function is controlling how sphere-packing works in M.)

Let Γ be a graph.  By an M-coloring of Γ we now mean a map from v(Γ) to M such that adjacent vertices are at distance at least 1.  Write χ_Γ(M) for the radius of the smallest ball containing the image of an M-coloring of Γ.  Then we can think of r_M^{-1}(χ_Γ(M)) as a kind of “M-chromatic number of Γ.”  Scare quotes are because r_M isn’t necessarily going to be an invertible function or anything; if I wanted to say something literally correct I guess I would say the smallest integer n such that r_M(n) >= χ_Γ(M).

The M-chromatic number is at most the usual chromatic number χ_Γ;  more precisely,

χ_Γ(M) <= r_M(χ_Γ)

Easy:  if there’s an n-coloring of Γ, just compose it with a map sending [n] to n points of M at pairwise distance at least 1 inside a ball of radius r_M(n).  Similarly, if ω_Γ is the clique number of Γ, we have

r_M(ω_Γ) <= χ_Γ(M)

because a k-clique can’t be embedded in a ball of radius smaller than r_M(k).

So this M-chromatic number gives a lower bound for the chromatic number and an upper bound for the clique number, just as the Lovasz number does, and just as the fractional chromatic number does.

Example 1:  Lovasz number.  Let M be the sphere in infinite-dimensional Euclidean space.  (Or |Γ|-dimensional Euclidean space, doesn’t matter.)  For our metric use (1/sqrt(2)) Euclidean distance, so that orthogonal vectors are at distance 1 from each other.  If n points are required at pairwise distance at least 1, the closest way to pack them is to make them orthonormal (I didn’t check this, surely easy) and in this case they sit in a ball of radius 1 - sqrt(1/(2n)) around their center of mass.  So r_M(n) = 1 - sqrt(1/(2n)).  Define t(Γ) to be the real number such that

1 - \sqrt{1/(2t(\Gamma))} = \chi_\Gamma(M).

Now I was going to say that t(Γ) is the Lovasz theta number of Γ, but that’s not exactly the definition; that would be the definition if I required the embedding to send adjacent vertices to points at distance exactly 1.  The answer to this MO question suggests that an example of Schrijver might actually separate these invariants, but I haven’t checked.

So I guess let’s say t(Γ) is a “Lovasz-like number” which is between the clique number and the chromatic number.  And like the Lovasz number, but unlike clique and chromatic numbers, it’s super-easy to compute.  An embedding of v(Γ) in the sphere, up to rotation, is specified by its Gram matrix of inner products, which can be an arbitrary positive semidefinite symmetric nxn matrix A with 1’s on the diagonal.  Each edge of Γ now gives a linear inequality a_{ij} <= 0, which is what “distance at least 1” means in this normalization.  When you’re optimizing over a space cut out by linear inequalities in the space of psd matrices, you’re just doing semidefinite programming.  (I am punting a little about how to optimize “radius” but hopefully maximum distance of any vector from center of mass is good enough?)
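For what it’s worth, here is what the standard Lovasz theta semidefinite program looks like in code.  This is a sketch of mine assuming cvxpy and numpy are available; it computes the theta number of the complement of Γ, which is the version sandwiched between the clique number and the chromatic number, not literally the t(Γ) above.  The point is just how little code the SDP takes:

```python
import cvxpy as cp
import numpy as np

def theta_of_complement(adj):
    """Lovasz theta of the complement graph: maximize <J, X> over symmetric psd X
    with trace 1 and X_ij = 0 whenever ij is a non-edge of Gamma."""
    n = adj.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i, j]:
                constraints.append(X[i, j] == 0)
    return cp.Problem(cp.Maximize(cp.sum(X)), constraints).solve()

# The 5-cycle: clique number 2, chromatic number 3, and this SDP returns sqrt(5) ~ 2.236.
C5 = np.zeros((5, 5), dtype=int)
for i in range(5):
    C5[i, (i + 1) % 5] = C5[(i + 1) % 5, i] = 1
print(theta_of_complement(C5))
```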

(Note:  you know what, I’ll bet you can take an embedding like this, subtract a small multiple of the center of mass from all the vectors, and get an embedding of v(Γ) in n-space with all angles between adjacent vectors slightly obtuse, and probably this ends up being exactly the same thing as the vector chromatic number defined in the paper I linked to earlier.)

Where is example 2?  It was supposed to be about the fractional chromatic number but then I realized the way I was setting this up wasn’t correct.  The idea is to let M_b be the space of infinite bit strings with exactly b 1’s and use (1/2b) Hamming distance, so that the distance-1 requirement becomes a requirement that two b-element subsets be disjoint.  But I don’t think this quite fits into the framework I adopted at the top of the post.  I’ll circle back to this if I end up having what to say.



Coloring graphs with polynomials

More chromatic hoonja-doonja!

Suppose you have a bunch of tokens of different colors and weights.  X_1 colors of weight 1 tokens, X_2 colors of weight 2 tokens, etc.

Let Γ be a graph.  A (weighted) b-coloring of Γ is an assignment to each vertex of a set of tokens with total weight b, such that adjacent vertices have no tokens in common.  Let χ_Γ(X_1, … X_b) be the number of b-colorings of Γ.  I made up this definition but I assume it’s in the literature somewhere.
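For small graphs the count is easy to get by brute force; here is a sketch (plain standard-library Python, with my own naming conventions):

```python
from itertools import combinations, product

def weight_b_token_sets(X, b):
    """All sets of distinct tokens with total weight b, where X[w] is the number
    of colors of weight-w tokens."""
    tokens = [(w, c) for w, count in X.items() for c in range(count)]
    return [frozenset(s) for r in range(1, b + 1)
            for s in combinations(tokens, r) if sum(w for w, _ in s) == b]

def count_b_colorings(vertices, edges, X, b):
    """Brute-force count of weighted b-colorings: adjacent vertices share no token.
    Vertices are assumed to be 0, 1, ..., len(vertices) - 1."""
    choices = weight_b_token_sets(X, b)
    return sum(
        all(assignment[u].isdisjoint(assignment[v]) for u, v in edges)
        for assignment in product(choices, repeat=len(vertices))
    )

# A triangle, b = 2, with 3 colors of weight-1 tokens and 2 colors of weight-2 tokens.
print(count_b_colorings([0, 1, 2], [(0, 1), (1, 2), (0, 2)], {1: 3, 2: 2}, 2))
```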


First of all, χ_Γ(X_1, … X_b) is a polynomial.

Is this multivariable “chromatic polynomial” of any interest?  Well, here’s one place it comes up.  By a degree-b polynomial coloring of Γ we mean an assignment of a monic squarefree degree-b polynomial in R[x] to each vertex of Γ, so that adjacent vertices are labeled with coprime polynomials.   Now let U_b(Γ) be the manifold parametrizing degree-b polynomial colorings of Γ.

Then the Euler characteristic of U_b(Γ) is χ_Γ(-1,1,0,…0).

I worked this out via the same kind of Lefschetz computation as in the previous post, but once you get the answer, you can actually derive this as a corollary of Stanley’s theorem.  And it was presumably already known.

By the way:  let V_n be the free vector space spanned by the b-colorings of Γ where all the tokens have weight 1; these are called fractional colorings sometimes.  Then S_n acts on V_n by permutation of colors.  The character of this action is a class function on S_n.  More precisely, it is

χ_Γ(X_1, … X_b)

where X_i is now interpreted as a class function on S_n, sending a permutation to the number of i-cycles in its cycle decomposition.  Of course the real thing going on behind the scenes is that the V_n form a finitely generated FI-module.


Counting acyclic orientations with topology

Still thinking about chromatic polynomials.   Recall: if Γ is a graph, the chromatic polynomial χ_Γ(n) is the number of ways to color the vertices of Γ with n colors so that no two adjacent vertices have the same color.

Fact:  χ_Γ(-1) is, up to sign, the number of acyclic orientations of Γ.  (Precisely, the number of acyclic orientations is (-1)^{|Γ|} χ_Γ(-1).)

This is a theorem of Richard Stanley from 1973.

Here’s a sketch of a weird proof of that fact, which I think can be made into an actual weird proof.  Let U be the hyperplane complement

\mathbf{A}^{|\Gamma|} \setminus \bigcup_{ij \in e(\Gamma)} \{ z_i = z_j \}

Note that |U(F_q)| is just the number of colorings of Γ by elements of F_q; that is,  χ_Γ(q).  More importantly, the Poincaré polynomial of the manifold U(C) is (up to powers of -1 and t) χ_Γ(-1/t).  The reason the point count knows about the cohomology is that H^i(U) contributes (-1)^i (dim H^i) q^{dim U - i} to |U(F_q)|.  (OK, I switched to etale cohomology but for hyperplane complements everything’s fine.)  So what should  χ_Γ(-1) mean?  Well, the Lefschetz trace formula suggests you look for an operator on U(C) which acts as -1 on the H^1, whence as (-1)^i on the H^i.  Hey, I can think of one — complex conjugation!  Call that c.

Then Lefschetz says χ_Γ(-1) should be the number of fixed points of c, perhaps counted with some index.  But careful — the fixed point locus of c isn’t a bunch of isolated points, as it would be for a generic diffeo; it’s U(R), which has positive dimension!  But that’s OK; in cases like this we can just replace cardinality with Euler characteristic.  (This is the part that’s folkloric and sketchy.)  So

χ(U(R)) = χ_Γ(-1)

at least up to sign.  But U(R) is just a real hyperplane complement, which means all its components are contractible, so the Euler characteristic is just the number of components.  What’s more:  if (x_1, … x_|Γ|) is a point of U(R), then x_i - x_j is nonzero for every edge ij; that means that the sign of x_i - x_j is constant on every component of U(R).  That sign is equivalent to an orientation of the edge!  And this orientation is obviously acyclic.  Furthermore, every acyclic orientation can evidently be realized by a point of U(R).

To sum up:  acyclic orientations are in bijection with the connected components of U(R), which by Lefschetz are χ_Γ(-1) in number.
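Here is a brute-force sanity check of the fact on a small example, a sketch of mine (numpy assumed for the interpolation) and separate from the topological argument above:

```python
from itertools import product
import numpy as np

def chromatic_count(vertices, edges, n_colors):
    """Number of proper colorings with n_colors colors, by brute force.
    Vertices are assumed to be 0, 1, ..., len(vertices) - 1."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(n_colors), repeat=len(vertices))
    )

def is_acyclic(vertices, arcs):
    """Repeatedly strip vertices with no incoming arc; acyclic iff nothing is left over."""
    remaining, arcs = set(vertices), set(arcs)
    while remaining:
        sources = [v for v in remaining if not any(head == v for _, head in arcs)]
        if not sources:
            return False
        remaining -= set(sources)
        arcs = {(t, h) for t, h in arcs if t in remaining and h in remaining}
    return True

def count_acyclic_orientations(vertices, edges):
    return sum(
        is_acyclic(vertices, [(u, v) if b else (v, u) for (u, v), b in zip(edges, bits)])
        for bits in product([0, 1], repeat=len(edges))
    )

# The 4-cycle: chi(n) = (n-1)^4 + (n-1), so chi(-1) = 14; there are 14 acyclic orientations.
V, E = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]
values = [chromatic_count(V, E, n) for n in range(len(V) + 1)]
coeffs = np.polyfit(range(len(V) + 1), values, deg=len(V))
print(round(np.polyval(coeffs, -1)), (-1) ** len(V) * count_acyclic_orientations(V, E))
```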



Metric chromatic numbers

Idle thought.  Let G be a (loopless) graph, let M be a metric space, and let b be a nonnegative real number.  Then let Conf(G,M,b) be the space of maps from the vertices of the graph to M such that no two adjacent vertices are within b of each other.

When b=0 and G is the complete graph K_n, this is the usual ordered configuration space of n points on M.  When G is the empty graph on n vertices, it’s just M^n.  When M is a set of N points with the discrete metric, Conf(G,M,b) is the same thing for every b;  a set of points whose cardinality is χ_G(N), the chromatic polynomial of G evaluated at N.  When M is a box, Conf(G,M,b) is the “discs in a box” space I blogged about here.  If M is (Z/2Z)^k with Hamming distance, you are asking about how many ways you can supply G with k 2-colorings so that no edge is monochromatic in more than k-b-1 of them.

Update:  Ian Agol links in the comments to this paper about Conf(G,M,0) by Eastwood and Huggett; the paper points out that the Euler characteristic of Conf(G,M,0) computes χ_G(N) whenever M has Euler characteristic N; so M being N points works, but so does M = CP^{N-1}, and that’s the case they study.

When b=0 and M is the complex plane C, Conf(G,C,0) is the complement of the graphic arrangement of G; its Poincaré polynomial is  (-t)^{|G|} χ_G(-1/t).  Lots of graphs have the same chromatic polynomial (e.g. all trees do) but do they have homeomorphic Conf(G,C,0)?

This is fun to think about!  Is Conf(G,M,0), for various manifolds other than discrete sets of points, an interesting invariant of a graph?

You can think of

vol(Conf(G,M,b)) vol(M)^{-n}

as a sort of analogue of the chromatic polynomial, especially when b is small; when M = C, for instance, Conf(G,M,b) is just the complement of tubular neighborhoods around the hyperplanes in the graphical arrangement, and its volume should be roughly b^|G|χ_G(1/b) I think.  When b gets big, this function deviates from the chromatic polynomial, and in particular you can ask when it hits 0.
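Since vol(Conf(G,M,b)) vol(M)^{-n} is just the probability that n independent uniform points of M satisfy all the edge constraints, it’s easy to estimate by Monte Carlo.  A sketch of mine, with M a unit square (numpy assumed):

```python
import numpy as np

def conf_fraction(edges, n_vertices, b, trials=200000, seed=0):
    """Monte Carlo estimate of vol(Conf(G, M, b)) / vol(M)^n for M the unit square."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, 1, size=(trials, n_vertices, 2))
    ok = np.ones(trials, dtype=bool)
    for u, v in edges:
        ok &= np.linalg.norm(pts[:, u] - pts[:, v], axis=1) >= b
    return ok.mean()

# Triangle: fraction of triples of points in the unit square that are pairwise >= b apart.
print(conf_fraction([(0, 1), (1, 2), (0, 2)], 3, b=0.3))
```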

In other words:  you could define an M-chromatic number χ(G,M) to be the smallest B such that Conf(G,M,1/B) is nonempty.  When M is a circle S^1 with circumference 1, you can check that χ(G,M) is at least the clique number of G and at most the chromatic number.  If G is a (2n+1)-cycle, the clique number is 2, the chromatic number is 3, and the S^1-chromatic number is 2+1/n, if I did this right.  Does it have anything to do with the Lovasz number, which is also wedged between clique number and chromatic number?  Relevant here:  the vector chromatic number, which is determined by χ(G,S^{v(G)-1}), and which is in fact a lower bound for the Lovasz number.



My Erdos-Bacon-Sabbath number is 11

I am pleased to report that I have an Erdös-Bacon-Sabbath number.

My Erdös number is 3; has been for a while, probably always will be.  I wrote a paper with Mike Bennett and Nathan Ng about solutions to A^4 + B^2 = C^p; Mike wrote with Florian Luca; Luca wrote with Erdös.

A while back, I shot a scene for the movie Gifted.  I’m not on the IMDB page yet, but I play against type as “Professor.”  Also in this movie is Octavia Spencer, who was in Beauty Shop (2005) with Kevin Bacon.  So my Bacon number is 2.

That gives me an Erdös-Bacon number of 5; already pretty high on the leaderboard!

Of course it then fell to me to figure out whether I have a Sabbath number.  Here’s the best chain I could make.

I once played guitar on “What Goes On” with my friend Jay Michaelson’s band, The Swains, at Brownies.

Jay performed with Ezra Lipp “sometime in 2000,” he reports.

Lipp has played with Chris Robinson of the Black Crowes.

From here we use the Six Degrees of Black Sabbath tool, written by Paul Lamere at EchoNest (now part of the Spotify empire.)

The Black Crowes backed up Jimmy Page at a concert in 1999.

Page played with David Coverdale in Coverdale/Page.

David Coverdale was in Deep Purple with Glenn Hughes of Black Sabbath.

So my Sabbath number is 6, and my Erdos-Bacon-Sabbath number is 11.



What I learned at the Joint Math Meetings

Another Joint Meetings in the books!  My first time in San Antonio, until last weekend the largest US city I’d never been to.  (Next up:  Jacksonville.)  A few highlights:

  • Ngoc Tran, a postdoc at Austin, talked about zeroes of random tropical polynomials.  She’s proved that a random univariate tropical polynomial of degree n has about c log n roots; this is the tropical version of an old theorem of Kac, which says that a random real polynomial of degree n has about c log n real roots.  She raised interesting further questions, like:  what does the zero locus of a random tropical polynomial in more variables look like?  I wonder:  does it look anything like the zero set of a random band-limited function on the sphere, as discussed by Sarnak and Wigman?  If you take a random tropical polynomial in two variables, its zero set partitions the plane into polygons, which gives you a graph by adjacency:  what kind of random graph is this?
  • Speaking of random graphs, have you heard the good news about L^p graphons?  I missed the “limits of discrete structures” special session which had tons of talks about this, but I ran into the always awesome Henry Cohn, who gave me the 15-minute version.  Here’s the basic idea.  Large dense graphs can be modeled by graphons; you take a symmetric function W from [0,1]^2 to [0,1], and then your procedure for generating a random graph goes like this. Sample n points x_1,…x_n uniformly from [0,1] — these are your vertices.  Now put an edge between x_i and x_j with probability W(x_i,x_j) = W(x_j,x_i).  So if W is constant with value p, you get your usual Erdös-Renyi graphs, but if W varies some, you can get variants of E-R, like the much-beloved stochastic blockmodel graphs, that have some variation of edge density.  But not too much!  These graphon graphs are always going to have almost all vertices with degree linear in n.  That’s not at all like the networks you encounter in real life, which are typically sparse (vertex degrees growing sublinearly in n, or even being constant on average) and typically highly variable in degree (e.g. degrees following a power law, not living in a band of constant multiplicative width.)  The new theory of L^p graphons is vastly more general.  I’ve only looked at this paper for a half hour but I feel like it’s the answer to a question that’s always bugged me:  what are the right descriptors for the kinds of random graphs that actually occur in nature?  Very excited about this, will read it more, and will give a SILO seminar about it on February 4, for those around Madison.  (There’s a short code sketch of this graphon sampling procedure at the end of this post.)
  • Wait, I’ve got still one more thing about random graphs!  Russ Lyons gave a plenary about his work with Angel and Kechris about unique ergodicity of the action of the automorphism group of the random graph.  Wait, the random graph? I thought there were lots of random graphs!  Nope — when you try to define the Erdös-Renyi graph on countably many vertices, there’s a certain graph (called “the Rado graph”) to which your random graph is isomorphic with probability 1!  What’s more, this is true — and it’s the same graph — no matter what p is, as long as it’s not 0 or 1!  That’s very weird, but proving it’s true is actually pretty easy.  I leave it as an exercise.
  • Rick Kenyon gave a beautiful talk about his work with Aaron Abrams about “rectangulations” — decompositions of a rectangle into area-1 subrectangles.  Suppose you have a weighted directed graph, representing a circuit diagram, where the weights on the edges are the conductances of the corresponding wires.  It turns out that if you fix the energy along each edge (say, to 1) and an acyclic orientation of the edges, there’s a unique choice of edge conductances such that there exists a Dirichlet solution (i.e. an energy-minimizing assignment of a voltage to each node) with the given energies.  These are the fibers of a rational map defined over Q, so this actually gives you an object over a (totally real) algebraic number field for each acyclic orientation.  As Rick pointed out, this smells a little bit like dessins d’enfants!  (Though I don’t see any direct relation.)  Back to rectangulations:  it turns out there’s a gadget called the “Smith Diagram” which takes a solution to the Dirichlet problem on the graph and turns it into a rectangulation, where each edge corresponds to a rectangle, the area of the rectangle is the energy contributed by the current along that edge, the aspect ratio of the rectangle is the conductance, the bottom and top faces of the rectangle correspond to the source and target nodes, the height of a face is the voltage at that node, etc.  Very cool!  Even cooler when you see the pictures.  For a 40×40 grid, it looks like this:

[Figure:  the Smith diagram rectangulation for a 40×40 grid.]

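Going back to the graphon bullet: here is the sampling procedure in code, a sketch of mine assuming numpy, with a two-block W chosen purely for illustration.

```python
import numpy as np

def sample_graphon_graph(W, n, seed=0):
    """Sample a graph from a graphon W: [0,1]^2 -> [0,1]: give each vertex a uniform
    label x_i, then include edge ij independently with probability W(x_i, x_j)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=n)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            adj[i, j] = adj[j, i] = rng.uniform() < W(x[i], x[j])
    return adj

# Constant W = p recovers Erdös-Renyi; this two-block W is a toy stochastic blockmodel.
W_block = lambda s, t: 0.8 if (s < 0.5) == (t < 0.5) else 0.05
A = sample_graphon_graph(W_block, 200)
print(A.sum(axis=1).mean())  # mean degree is of order n, as the bullet point notes
```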

The existence of designs

The big news in combinatorics is this new preprint by Peter Keevash, which proves the existence of Steiner systems, or more generally combinatorial designs, for essentially every system of parameters where the existence of such a design isn’t ruled out on divisibility grounds.  Remarkable!

I’m not going to say anything about this paper except to point out that it has even more in it than is contained in the top-billed theorem; the paper rests on the probabilistic method, which in this case means, more or less, that Keevash shows that you can choose a “partial combinatorial design” in an essentially random way, and with very high probability it will still be “close enough” that by very careful modifications (or, as Keevash says, “various applications of the nibble” — I love the names combinatorists give their techniques) you can get all the way to the desired combinatorial design.

This kind of argument is very robust!  For instance, Keevash gets the following result, which in a way I find just as handsome as the result on designs.  Take a random graph on n vertices — that is, each edge is present with probability 1/2, all edges independent.  Does that graph have a decomposition into disjoint triangles?  Well, probably not, right?  Because a union of triangles has to have even degree at each vertex, while the random graph is going to have n/2 of its vertices with odd degree. (This is the kind of divisibility obstruction I mentioned in the first paragraph.)  In fact, this divisibility argument shows that if the graph can be decomposed as a union of triangles with M extra edges, M has to be at least n/4 with high probability, since that’s how many edges you would need just to dispose of the odd-degree vertices.  And what Keevash’s theorem shows is there really is (with high probability) a union of disjoint triangles that leaves only (1+o(1))(n/4) edges of the random graph uncovered!

More details elsewhere from Vuhavan and Gil Kalai.


How much is the stacks project graph like a random graph?

Cathy posted some cool data yesterday coming from the new visualization features of the magnificent Stacks Project.  Summary:  you can make a directed graph whose vertices are the 10,445 tagged assertions in the Stacks Project, and whose edges are logical dependency.  So this graph (hopefully!) doesn’t have any directed cycles.  (Actually, Cathy tells me that the Stacks Project autovomits out any contribution that would create a logical cycle!  I wish LaTeX could do that.)

Given any assertion v, you can construct the subgraph G_v of vertices which are the terminus of a directed path starting at v.  And Cathy finds that if you plot the number of vertices and number of edges of each of these graphs, you get something that looks really, really close to a line.

Why is this so?  Does it suggest some underlying structure?  I tend to say no, or at least not much — my guess is that in some sense it is “expected” for graphs like this to have this sort of property.

Because I am trying to get strong at sage I coded some of this up this morning. One way to make a random directed graph with no cycles is as follows:  start with N vertices, and a function f on natural numbers k that decays with k; then connect each vertex n to vertex n-k (if there is such a vertex) with probability f(k).  The decaying function f is supposed to mimic the fact that an assertion is presumably more likely to refer to something just before it than something “far away” (though of course the Stacks Project is not a strictly linear thing like a book.)
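Here is roughly what that model looks like in code, a plain-Python sketch of mine (numpy assumed), not the actual Sage session:

```python
import numpy as np

def random_dag(N, f, seed=0):
    """Each vertex n gets an edge to the earlier vertex n - k with probability f(k),
    so the resulting directed graph has no cycles by construction."""
    rng = np.random.default_rng(seed)
    return {n: [n - k for k in range(1, n + 1) if rng.random() < f(k)]
            for n in range(N)}

def descendant_counts(children, v):
    """Number of vertices reachable from v (including v), and of edges among them."""
    seen, stack = set(), [v]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(children[u])
    return len(seen), sum(len(children[u]) for u in seen)

G = random_dag(1000, lambda k: (2 / 3) ** k)
pairs = [descendant_counts(G, v) for v in G]  # scatter-plot these to reproduce the plots below
print(pairs[:5])
```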

Here’s how the analogue of Cathy’s plot looks for a graph generated by N = 1000 and f(k) = (2/3)^k, which makes the mean out-degree 2, as suggested in Cathy’s post.

[Plot:  number of edges vs. number of vertices of the descendant subgraphs, for N = 1000 and f(k) = (2/3)^k.]

Pretty linear — though if you look closely you can see that there are really (at least) a couple of close-to-linear “strands” superimposed! At first I thought this was because I forgot to clear the plot before running the program, but no, this is the kind of thing that happens.

Is this because the distribution decays so fast, so that there are very few long-range edges? Here’s how the plot looks with f(k) = 1/k^2, a nice fat tail yielding many more long edges:

[Plot:  the same scatter with f(k) = 1/k^2.]

My guess: a random graph aficionado could prove that the plot stays very close to a line with high probability under a broad range of random graph models. But I don’t really know!

Update:  You know what must be happening here?  It’s not hard to check that in the models I’ve presented here, there’s a huge amount of overlap between the descendant graphs; in fact, a vertex is very likely to be connected to all but c of the vertices below it for a suitable constant c.

I would guess the Stacks Project graph doesn’t have this property (though it would be interesting to hear from Cathy to what extent this is the case) and that in her scatterplot we are not measuring the same graph again and again.

It might be fun to consider a model where vertices are pairs of natural numbers and (m,n) is connected to (m-k,n-l) with probability f(k,l) for some suitable decay.  Under those circumstances, you’d have substantially less overlap between the descendant trees; do you still get the approximately linear relationship between edges and nodes?


In which I have a quarter-million friends of friends on Facebook

One of the privacy options Facebook allows is “restrict to friends of friends.”  I was discussing with Tom Scocca the question of how many people this actually amounts to.  FB doesn’t seem to offer an easy way to get a definitive accounting, so I decided to use the new Facebook Graph Search to make a quick and dirty estimate.  If you ask it to show you all the friends of your friends, it just tells you that there are more than 1000, but doesn’t supply an exact number.  If you want a count, you have to ask it something more specific, like “How many friends of my friends are named Constance?”

In my case, the answer is 25.

So what does that mean?  Well, according to the amazing NameVoyager, between 100 and 300 babies per million are named Constance, at least in the birthdate range that contains most of Facebook’s user base and, I expect, most of my friends-of-friends (hereafter, FoFs) as well.  So under the assumption that my FoFs are as likely as the average American to be named Constance, there should be between 85,000 and 250,000 FoFs.

That assumption is massively unlikely, of course; name choices have strong correlations with geography, ethnicity, and socioeconomic thingamabobs.  But you can just do this redundantly to get a sense of what’s going on.  59 of my FoFs are named Marianne, a name whose frequency ranges from 150-300 parts per million; that suggests a FoF range of about 200-400K.

I did this for a few names (50 Geralds, 18 Charitys (Charities??)) and the overlaps of the ranges seemed to hump at around 250,000, so that’s my vague estimate for the number.
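The arithmetic is just the count divided by the name frequency; a tiny sketch, using the two names and NameVoyager ranges quoted above:

```python
# (FoFs with that name, low and high frequency in babies per million)
names = {
    "Constance": (25, 100, 300),
    "Marianne": (59, 150, 300),
}

for name, (count, low_ppm, high_ppm) in names.items():
    lo = count / (high_ppm / 1e6)   # the high frequency gives the low estimate
    hi = count / (low_ppm / 1e6)
    print(f"{name}: between {lo:,.0f} and {hi:,.0f} friends of friends")
```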

But then I remembered that there was actually a paper about this on the arXiv, “The Anatomy of the Facebook Social Graph,” by Ugander, Karrer, Backstrom, and Marlow, which studies exactly this question.  They found something which is, to me, rather surprising:  that the number of FoFs grows approximately linearly in the number of friends.  The appropriate coefficients have surely changed since 2011, but they get a good fit with

#FoF = 355·(#friends) - 15057.

For me, with 680 friends, that’s 226,343.  Good fit!

This 2012 study from Pew (on which Marlow is also an author) studies a sample in which the respondents had a mean 245 Facebook friends, and finds that the mean number of FoFs was 156,569.  Interestingly, the linear model from the earlier paper gives only 72,000, though to my eye it looks like 245 is well within the range where the fit to the line is very good.

The math question this suggests:  in the various random-graph models that people like to use to study social networks, what is the mean size of the 2-neighborhood of x (i.e. the number of FoFs) conditional on x having degree k?  Is it ever linear in k, or approximately linear over some large range of k?
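One way to poke at the question numerically, as a sketch of mine assuming networkx, with a plain Erdös-Renyi graph standing in for a more realistic model of the network:

```python
import networkx as nx
from collections import defaultdict

def mean_fof_by_degree(n=2000, p=0.01, seed=0):
    """Mean size of the 2-neighborhood (friends of friends), grouped by the vertex's degree."""
    G = nx.fast_gnp_random_graph(n, p, seed=seed)
    by_degree = defaultdict(list)
    for v in G:
        friends = set(G[v])
        fof = set()
        for u in friends:
            fof |= set(G[u])
        fof -= friends | {v}
        by_degree[len(friends)].append(len(fof))
    return {k: sum(c) / len(c) for k, c in sorted(by_degree.items())}

print(mean_fof_by_degree())
```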
