Autonomous Agents and Multi-Agent Systems

Papers
(The TQCC of Autonomous Agents and Multi-Agent Systems is 4. The table below lists the papers whose CrossRef citation counts are at or above that threshold [max. 250 papers], restricted to publications from the past four years, i.e., from 2021-11-01 to 2025-11-01. A minimal sketch of this selection rule follows the table.)
Article | Citations
Tackling school segregation with transportation network interventions: an agent-based modelling approach | 219
A framework for trust-related knowledge transfer in human–robot interaction | 52
Optimal matchings with one-sided preferences: fixed and cost-based quotas | 38
Enabling imitation-based cooperation in dynamic social networks | 35
Accountability in multi-agent organizations: from conceptual design to agent programming | 32
Guest editorial: special issue on fair division | 30
On-line estimators for ad-hoc task execution: learning types and parameters of teammates for effective teamwork | 26
Approximating voting rules from truncated ballots | 25
A formal testing method for multi-agent systems using colored Petri nets | 21
Large-scale agent-based simulations of online social networks | 18
Parameterized complexity of multiwinner determination: more effort towards fixed-parameter tractability | 18
Online Markov decision processes with non-oblivious strategic adversary | 16
Diffusion auction design with transaction costs | 15
Information elicitation mechanisms for Bayesian auctions | 13
Changing criteria weights to achieve fair VIKOR ranking: a postprocessing reranking approach | 13
Fairness criteria for allocating indivisible chores: connections and efficiencies | 12
Solving multi-agent games on networks | 12
Safe Pareto improvements for delegated game playing | 12
Warmth and competence in human-agent cooperation | 12
Adaptation Procedure in misinformation games | 12
Deploying vaccine distribution sites for improved accessibility and equity to support pandemic response | 11
A normative approach for resilient multiagent systems | 11
An introduction to computational argumentation research from a human argumentation perspective | 10
Exploring the influence of a user-specific explainable virtual advisor on health behaviour change intentions | 10
A survey of multi-agent deep reinforcement learning with communication | 10
On fair and efficient solutions for budget apportionment | 10
Multivariate algorithmics for eliminating envy by donating goods | 9
One-sided matching markets with endowments: equilibria and algorithms | 9
The complexity of verifying popularity and strict popularity in altruistic hedonic games | 9
Relationship design for socially-aware behavior in static games | 8
Symbolic knowledge injection meets intelligent agents: QoS metrics and experiments | 7
Accelerating deep reinforcement learning via knowledge-guided policy network | 7
Fast approximate bi-objective Pareto sets with quality bounds | 6
RGS⊕: RDF graph synchronization for collaborative robotics | 6
Privacy leakage of search-based multi-agent planning algorithms | 6
Equitability and welfare maximization for allocating indivisible items | 6
Theoretical properties of the MiCRO negotiation strategy | 5
The Cost and Complexity of Minimizing Envy in House Allocation | 5
Scalar reward is not enough: a response to Silver, Singh, Precup and Sutton (2021) | 5
Using psychological characteristics of situations for social situation comprehension in support agents | 5
Towards interactive explanation-based nutrition virtual coaching systems | 5
Gini index based initial coin offering mechanism | 4
Effect of asynchronous execution and imperfect communication on max-sum belief propagation | 4
The complexity of election problems with group-separable preferences | 4
Mandrake: multiagent systems as a basis for programming fault-tolerant decentralized applications | 4
Quantifying the effects of environment and population diversity in multi-agent reinforcement learning | 4
Unravelling multi-agent ranked delegations | 4
A game-theoretic approach for hierarchical epidemic control | 4
A refined complexity analysis of fair districting over graphs | 4
How to turn an MAS into a graphical causal model | 4
Differentially private multi-agent constraint optimization | 4
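The selection rule described in the note above (keep papers published within the four-year window whose citation count is at or above the TQCC of 4, sort by citations, and cap the list at 250 entries) can be sketched as a small filter. The `Paper` record, field names, and `select_papers` helper below are illustrative assumptions, not the site's actual implementation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type; field names are assumptions for illustration.
@dataclass
class Paper:
    title: str
    citations: int   # CrossRef citation count
    published: date

def select_papers(papers, tqcc=4,
                  start=date(2021, 11, 1), end=date(2025, 11, 1),
                  max_papers=250):
    """Keep papers published in [start, end] whose citation count meets the
    TQCC threshold, sorted by citations (descending), capped at max_papers."""
    eligible = [
        p for p in papers
        if start <= p.published <= end and p.citations >= tqcc
    ]
    eligible.sort(key=lambda p: p.citations, reverse=True)
    return eligible[:max_papers]

# Example usage with made-up records:
sample = [
    Paper("A framework for trust-related knowledge transfer in human-robot interaction",
          52, date(2022, 3, 15)),
    Paper("Some paper below the threshold", 2, date(2023, 6, 1)),
]
for p in select_papers(sample):
    print(f"{p.title} | {p.citations}")
```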