Document detail
ID

oai:arXiv.org:2410.12521

Subject
Electrical Engineering and Systems Science; Computer Science - Artificial Intelligence
Author
Deshpande, Riya Dinesh; Khan, Faheem A.; Ahmed, Qasim Zeeshan
Category

Computer Science

Year

2024

Listing date

23-10-2024

Keywords
network, model, vehicular, spectrum
Metrics

Description

As the number of devices connected to the vehicular network grows exponentially, effectively allocating spectrum in a dynamic vehicular environment becomes increasingly difficult.

Traditional methods may not suffice to tackle this issue.

Vehicular networks carry safety-critical messages, so an efficient spectrum allocation scheme is essential both for reliable communication and for managing congestion in the network.

To tackle this, a Deep Q-Network (DQN) model is proposed, leveraging its ability to learn optimal decision-making strategies over time.
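For orientation, a minimal sketch of such a DQN agent is shown below: a small Q-network maps an observed channel/link state to one Q-value per candidate channel, and an epsilon-greedy rule picks the transmission channel. The state encoding, layer sizes, and epsilon value are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal DQN sketch for spectrum-channel selection (illustrative only; the state
# encoding, layer sizes, and epsilon value are assumptions, not the paper's design).
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps an observed channel/link state vector to one Q-value per candidate channel."""
    def __init__(self, state_dim: int, num_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_channels),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_channel(q_net: QNetwork, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy channel choice: explore with probability epsilon,
    otherwise pick the channel with the highest predicted Q-value."""
    if random.random() < epsilon:
        return random.randrange(q_net.net[-1].out_features)
    with torch.no_grad():
        return int(q_net(state).argmax().item())
```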

The paper presents results and analyses demonstrating the efficacy of the DQN model in enhancing spectrum-sharing efficiency.

Deep Reinforcement Learning methods for spectrum sharing in vehicular networks have shown promising outcomes, demonstrating the system's ability to adapt to dynamic communication environments.

Both single-agent (SARL) and multi-agent (MARL) reinforcement learning models have achieved high V2V communication success rates, with the cumulative reward of the RL model reaching its maximum as training progresses.
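As an illustration of how the per-episode cumulative reward can be tracked as training progresses, the sketch below runs an epsilon-greedy DQN loop with a replay buffer and records the episode return. It reuses the QNetwork/select_channel sketch above; the reset/step environment interface, the reward definition, and all hyperparameters are hypothetical stand-ins, not the paper's simulator or settings.

```python
# Illustrative DQN training loop that records the cumulative reward per episode.
# The environment interface (env.reset() -> state, env.step(action) -> (state, reward, done))
# and all hyperparameters below are assumptions made for this sketch.
import random
from collections import deque
import torch
import torch.nn.functional as F

def train(env, q_net, episodes: int = 500, gamma: float = 0.99,
          epsilon: float = 0.1, batch_size: int = 64):
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)          # experience replay buffer
    reward_history = []                    # cumulative reward per episode

    for _ in range(episodes):
        state, done, total_reward = env.reset(), False, 0.0
        while not done:
            action = select_channel(q_net, torch.as_tensor(state, dtype=torch.float32), epsilon)
            next_state, reward, done = env.step(action)
            replay.append((state, action, reward, next_state, float(done)))
            state, total_reward = next_state, total_reward + reward

            if len(replay) >= batch_size:
                batch = random.sample(replay, batch_size)
                s, a, r, s2, d = (torch.as_tensor(x, dtype=torch.float32) for x in zip(*batch))
                # One-step temporal-difference target and mean-squared-error loss.
                q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
                target = r + gamma * q_net(s2).max(dim=1).values * (1 - d)
                loss = F.mse_loss(q, target.detach())
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

        reward_history.append(total_reward)
    return reward_history
```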

Deshpande, Riya Dinesh; Khan, Faheem A.; Ahmed, Qasim Zeeshan (2024). Spectrum Sharing using Deep Reinforcement Learning in Vehicular Networks.
