Document detail
ID

oai:arXiv.org:2410.12521

Topic
Electrical Engineering and Systems... Computer Science - Artificial Inte...
Author
Deshpande, Riya Dinesh; Khan, Faheem A.; Ahmed, Qasim Zeeshan
Category

Computer Science

Year

2024

Listing date

10/23/2024

Keywords
network model vehicular spectrum
Abstract

As the number of devices connected to the vehicular network grows exponentially, addressing the numerous challenges of effectively allocating spectrum in a dynamic vehicular environment becomes increasingly difficult.

Traditional methods may not suffice to tackle this issue.

Vehicular networks carry safety-critical messages, so it is important to implement an efficient spectrum allocation paradigm that enables hassle-free communication and manages congestion in the network.

To tackle this, a Deep Q-Network (DQN) model is proposed as a solution, leveraging its ability to learn optimal allocation strategies over time and make decisions accordingly.

The paper presents results and analyses demonstrating the efficacy of the DQN model in enhancing spectrum sharing efficiency.

Deep Reinforcement Learning methods for sharing spectrum in vehicular networks have shown promising outcomes, demonstrating the system's ability to adjust to dynamic communication environments.

Both single-agent (SARL) and multi-agent (MARL) reinforcement learning models have achieved high V2V communication success rates, with the cumulative reward of the RL model reaching its maximum as training progresses.
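The DQN approach described above can be illustrated with a minimal sketch: a small Q-network maps a hand-crafted link state to a Q-value per sub-band, an epsilon-greedy policy picks the sub-band to transmit on, and a temporal-difference update adjusts the network weights. All names, state features, and sizes here are assumptions for illustration, not the paper's actual architecture or reward design.

```python
import numpy as np

# Illustrative DQN-style agent for discrete spectrum (sub-band) selection.
# State features, reward, and network sizes are assumed, not from the paper.
rng = np.random.default_rng(0)

STATE_DIM = 4    # e.g. [own channel gain, interference, queue backlog, deadline]
N_SUBBANDS = 3   # discrete actions: which sub-band to transmit on
HIDDEN = 16
GAMMA = 0.9      # discount factor
LR = 1e-2        # learning rate
EPSILON = 0.1    # exploration rate

# One-hidden-layer Q-network: state -> Q-value per sub-band.
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_SUBBANDS))

def q_values(s):
    h = np.maximum(0.0, s @ W1)        # ReLU hidden layer
    return h @ W2, h

def select_action(s):
    # Epsilon-greedy over the Q-values.
    if rng.random() < EPSILON:
        return int(rng.integers(N_SUBBANDS))
    q, _ = q_values(s)
    return int(np.argmax(q))

def td_update(s, a, r, s_next):
    # One semi-gradient TD step on 0.5 * (target - Q(s, a))^2.
    global W1, W2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + GAMMA * np.max(q_next)   # bootstrapped TD target
    err = target - q[a]
    grad_W2 = np.outer(h, np.eye(N_SUBBANDS)[a]) * err
    grad_h = W2[:, a] * err
    grad_h[h <= 0] = 0.0                  # ReLU gradient mask
    grad_W1 = np.outer(s, grad_h)
    W2 += LR * grad_W2
    W1 += LR * grad_W1
    return err

# One interaction step with a toy reward: 1 if the chosen sub-band matches
# the largest gain component of the state, else 0 (purely illustrative).
s = rng.random(STATE_DIM)
a = select_action(s)
r = 1.0 if a == int(np.argmax(s[:N_SUBBANDS])) else 0.0
err = td_update(s, a, r, rng.random(STATE_DIM))
```

In a full agent this update would be applied over mini-batches drawn from a replay buffer, with a separate target network, which is what lets the cumulative reward climb toward its maximum as training progresses.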

Deshpande, Riya Dinesh; Khan, Faheem A.; Ahmed, Qasim Zeeshan (2024). Spectrum Sharing using Deep Reinforcement Learning in Vehicular Networks.
