Program: July 23, 2021, 10AM–12PM EDT
 10:00am–10:20am EDT
 10:20am–10:40am EDT
 10:40am–11:00am EDT
 Gather.town: moderated discussion of recent NetEcon-topic papers that have been most influential on you
 11:00am–11:20am EDT
 11:20am–11:40am EDT
 11:40am–12:00pm EDT
 Gather.town: open discussion
Paper details:
by Yongkang Guo, Zhihuan Huang, Yuqing Kong and Qian Wang
Abstract: Community structure is an important feature of many networks. One of the most popular ways to capture community structure is using a quantitative measure, modularity, which can serve as both a standard benchmark for comparing different community detection algorithms, and an optimization objective for detecting communities. Previous work on modularity has mainly focused on approximation methods for modularity maximization to detect communities, or on minor modifications to the definition.
In this paper, we study modularity from an information-theoretic perspective and show that modularity and mutual information in networks are essentially the same. The main contribution is that we develop a family of generalized modularity measures, $f$-Modularity, which includes the original modularity as a special case. At a high level, we show that the significance of community structure is equivalent to the amount of information contained in the network. On the one hand, $f$-Modularity has an information-theoretic interpretation and enjoys the desired properties of a mutual information measure. On the other hand, quantifying community structure also provides an approach to estimating the mutual information between discrete random samples with a large value space but given only limited samples. We demonstrate the algorithm for optimizing $f$-Modularity in a relatively general case, and validate it through experimental results on simulated networks. We also apply $f$-Modularity to real-world market networks. Our results bridge two important fields, complex networks and information theory, and also shed light on the design of measures of community structure in the future.
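As a point of reference, the classical (Newman–Girvan) modularity that $f$-Modularity generalizes can be computed directly from an adjacency matrix. A minimal sketch; the example graph and partition are illustrative, not from the paper:

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity:
    Q = (1/2m) * sum_ij (A_ij - k_i*k_j / 2m) * delta(c_i, c_j).

    A: symmetric adjacency matrix; communities: community label per node.
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                   # node degrees
    two_m = A.sum()                     # 2m = total degree
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]     # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single bridge edge, split into the two triangles.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))   # 5/14, about 0.357
```

Putting all nodes in one community yields Q = 0, as expected from the definition.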
by Christos Papadimitriou, Kiran Vodrahalli and Mihalis Yannakakis
Abstract: Online firms deploy suites of software platforms, where each platform is designed to interact with users during a certain activity, such as browsing, chatting, socializing, emailing, driving, etc. The economic and incentive structure of this exchange, as well as its algorithmic nature, have not been explored to our knowledge. We model this interaction as a Stackelberg game between a Designer and one or more Agents. We model an Agent as a Markov chain whose states are activities; we assume that the Agent's utility is a linear function of the steady-state distribution of this chain. The Designer may design a platform for each of these activities/states; if a platform is adopted by the Agent, the transition probabilities of the Markov chain are affected, and so is the objective of the Agent. The Designer's utility is a linear function of the steady-state probabilities of the accessible states (that is, the ones for which the platform has been adopted), minus the development cost of the platforms. The underlying optimization problem of the Agent (how to choose the states for which to adopt the platform) is an MDP. If this MDP has a simple yet plausible structure (the transition probabilities from one state to another depend only on the target state and the recurrence probability of the current state), the Agent's problem can be solved by a greedy algorithm. The Designer's optimization problem (designing a custom suite for the Agent so as to optimize, through the Agent's optimum reaction, the Designer's revenue) is in general NP-hard to approximate within any finite ratio; in the special case above, however, the problem, while still NP-hard, admits an FPTAS. These results generalize, under mild additional assumptions, from a single Agent to a distribution of Agents with finite support, as well as to the setting where other Designers have already created platforms, and the Designer must find the best response to the strategies of the other Designers.
We discuss other implications of our results and directions of future research.
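The Agent's objective in this model, a linear function of the chain's steady-state distribution that shifts when a platform is adopted, can be illustrated with a toy computation. The transition matrices, utility weights, and effect of adoption below are all invented for illustration:

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of an ergodic Markov chain:
    solve pi P = pi subject to sum(pi) = 1."""
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 3-activity chain; adopting a platform for activity 0
# changes that state's outgoing transition probabilities.
P_base = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.6, 0.2],
                   [0.3, 0.3, 0.4]])
P_adopt = P_base.copy()
P_adopt[0] = [0.7, 0.2, 0.1]        # the platform makes activity 0 "stickier"
u = np.array([1.0, 0.5, 0.2])       # Agent's per-activity utility weights

for P, tag in [(P_base, "no platform"), (P_adopt, "platform on state 0")]:
    pi = stationary(P)
    print(tag, pi, "utility:", pi @ u)
```

The Agent's adoption decision compares such steady-state utilities across subsets of platforms, which is exactly where the MDP structure described above enters.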
by Matheus Venturyne Xavier Ferreira, Daniel J. Moroz, David C. Parkes and Mitchell Stern
Abstract: In recent years, prominent blockchain systems such as Bitcoin and Ethereum have experienced explosive growth in transaction volume, leading to frequent surges in demand for limited block space, causing transaction fees to fluctuate by orders of magnitude. Under the standard first-price auction approach, users find it difficult to estimate how much they need to bid to get their transactions accepted (balancing the risk of delay with a preference to avoid paying more than is necessary).
In light of these issues, new transaction fee mechanisms have been proposed, most notably EIP-1559, proposed by \citet{buterin2019eip1559}. A problem with EIP-1559 is that under market instability, it again reduces to a first-price auction. Here, we propose dynamic posted-price mechanisms, which are {\em ex post} Nash incentive compatible for myopic bidders and dominant strategy incentive compatible for myopic miners. We give sufficient conditions under which our mechanisms are stable and approximately welfare optimal in the probabilistic setting where, at each time step, bidders are drawn i.i.d. from a static (but unknown) distribution. Under this setting, we show instances where our dynamic mechanisms are stable, but EIP-1559 is unstable. Our main technical contribution is an iterative algorithm that, given oracle access to a Lipschitz continuous and concave function $f$, converges to a fixed point of $f$.
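The fixed-point computation at the heart of the result can be illustrated generically. The sketch below uses a standard averaged (Krasnoselskii–Mann) iteration on a 1-Lipschitz function; it conveys the flavor of the problem and is not the paper's algorithm:

```python
import math

def fixed_point(f, x0, alpha=0.5, tol=1e-10, max_iter=10_000):
    """Krasnoselskii-Mann averaged iteration: x <- (1-alpha)*x + alpha*f(x).

    Converges to a fixed point when f is nonexpansive (e.g. 1-Lipschitz)
    and a fixed point exists; a generic textbook scheme.
    """
    x = x0
    for _ in range(max_iter):
        x_new = (1 - alpha) * x + alpha * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# f(x) = cos(x) is 1-Lipschitz; its unique fixed point is the Dottie number.
print(fixed_point(math.cos, 0.0))   # about 0.7390851
```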
by Meng Zhang, Ermin Wei and Randall Berry
Abstract: Federated learning enables machine learning algorithms to be trained over multiple decentralized edge devices without requiring the exchange of local datasets. Successfully deploying federated learning requires ensuring that agents (e.g., mobile devices) faithfully execute the intended algorithm, which has been largely overlooked in the literature. In this study, we first use risk bounds to analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate in and obediently follow traditional federated learning algorithms. Our analysis reveals that agents with less typical data distributions and relatively more samples are more inclined to opt out of or tamper with federated learning algorithms. We then design a Faithful Federated Learning (FFL) mechanism which approximates the Vickrey–Clarke–Groves (VCG) payments via an incremental computation. We show that it achieves (probably approximate) optimality, faithful implementation, voluntary participation, and budget balance. Further, the time complexity of computing all agents' payments is $\mathcal{O}(1)$ in the number of agents.
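For background, the classical VCG payments that FFL approximates charge each agent the externality it imposes on the others. A textbook sketch over a finite outcome set (not the paper's incremental computation):

```python
def vcg(values):
    """Generic VCG mechanism over a finite outcome set.

    values[i][o] = agent i's value for outcome o. Returns the
    welfare-maximizing outcome and each agent's Clarke-pivot payment:
    (others' best welfare without i) - (others' welfare at the chosen outcome).
    """
    n = len(values)
    outcomes = range(len(values[0]))

    def others_welfare(exclude):
        return max(sum(values[i][o] for i in range(n) if i != exclude)
                   for o in outcomes)

    best = max(outcomes, key=lambda o: sum(v[o] for v in values))
    payments = [others_welfare(i)
                - sum(values[j][best] for j in range(n) if j != i)
                for i in range(n)]
    return best, payments

# Single item, outcome 0 = "give to agent 0", outcome 1 = "give to agent 1":
# VCG reduces to a second-price auction.
print(vcg([[10, 0], [0, 7]]))   # (0, [7, 0])
```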
The following papers, which were presented virtually in 2020, are carried forward:

“Learning Opinions in Social Networks” by Vincent Conitzer, Debmalya Panigrahi and Hanrui Zhang
 Invited Discussants: Wei Chen and David Kempe
 Chair: Grant Schoenebeck
 Abstract: We study the problem of learning opinions in social networks. The learner observes the states of some sample nodes from a social network, and tries to infer the states of other nodes, based on the structure of the network. We show that sample-efficient learning is impossible when the network exhibits strong noise, and give a polynomial-time algorithm for the problem with nearly optimal sample complexity when the network is sufficiently stable.

“Towards Data Auctions With Externalities” by Anish Agarwal, Munther Dahleh, Thibaut Horel and Maryann Rui
 Invited Discussants: Dirk Bergemann and Tan Gan
 Chair: Heinrich Nax
 Abstract: The design of data markets has gained in importance as firms increasingly use predictions from machine learning models to make their operations more effective, yet need to externally acquire the necessary training data to fit such models. This is particularly true in the context of the Internet, where an ever-increasing amount of user data is being collected and exchanged. A property of such markets that has been given limited consideration thus far is the externality faced by a firm when data is allocated to other, competing firms. Addressing this is likely necessary for progress towards the practical implementation of such markets. In this work, we consider the case with n competing firms and a monopolistic data seller. We demonstrate that modeling the utility of firms solely through the increase in prediction accuracy experienced reduces the complex, combinatorial problem of allocating and pricing multiple data sets to an auction of a single digital (freely replicable) good. Crucially, this also enables us to model the negative externalities experienced by a firm resulting from other firms’ allocations as a weighted directed graph. We obtain forms of the welfare-maximizing and revenue-maximizing auctions for such settings, and highlight how the form of the firms’ private information – whether they know the externalities they exert on others or that others exert on them – affects the structure of the optimal mechanisms. We find that in all cases, the optimal allocation rules turn out to be single thresholds (one per firm), in which the seller allocates all information or none of it to a firm.

“A Closed-Loop Framework for Inference, Prediction and Control of SIR Epidemics on Networks” by Ashish R. Hota, Jaydeep Godbole, Sanket Kumar Singh and Philip E. Pare
 Invited Discussants: Kuang Xu and Lei Ying
 Chair: Longbo Huang
 Abstract: Motivated by the ongoing COVID-19 pandemic, we propose a closed-loop framework that combines inference from testing data, learning the parameters of the dynamics, and optimal resource allocation for controlling the spread of the susceptible-infected-recovered (SIR) epidemic on networks. Our framework incorporates several key factors present in testing data, such as the fact that high-risk individuals are more likely to undergo testing and that infected individuals can remain asymptomatic carriers of the disease. We then present two tractable optimization problems to evaluate the trade-off between controlling the growth rate of the epidemic and the cost of non-pharmaceutical interventions (NPIs). Our results provide critical insights for policymakers, including the emergence of a second wave of infections if NPIs are prematurely withdrawn.
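For context, a discrete-time mean-field networked SIR model (a standard textbook construction, not the paper's inference framework) can be simulated in a few lines; the graph, rates, and seeding below are illustrative:

```python
import numpy as np

def network_sir(A, beta, gamma, s0, x0, r0, steps):
    """Discrete-time mean-field SIR dynamics on a network.

    A: adjacency matrix; beta: per-edge infection rate; gamma: recovery rate.
    s, x, r: per-node probabilities of being susceptible/infected/recovered.
    Returns the total expected number of infected nodes over time.
    """
    s, x, r = (np.asarray(v, dtype=float) for v in (s0, x0, r0))
    history = [x.sum()]
    for _ in range(steps):
        new_inf = beta * s * (A @ x)    # infection pressure from neighbors
        new_rec = gamma * x
        s, x, r = s - new_inf, x + new_inf - new_rec, r + new_rec
        history.append(x.sum())
    return history

# 4-node ring; seed node 0 with infection probability 0.1.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(network_sir(A, beta=0.2, gamma=0.1,
                  s0=[0.9, 1, 1, 1], x0=[0.1, 0, 0, 0], r0=[0, 0, 0, 0],
                  steps=5))
```

An NPI in this toy model would reduce beta (or delete edges of A), trading off growth rate against intervention cost, which mirrors the optimization problems described above.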