
Simulator

A space cognitive radio simulator has been implemented within SCREEN. The implementation is based on the well-known ns-3 network simulator (https://www.nsnam.org/), considers propagation loss and delay models and realistic satellite orbits, and supports the placement of interferer nodes.

The parameters of the simulation represent realistic communication scenarios. In the scenario depicted above, a Low Earth Orbit (LEO) satellite and two Geostationary Satellites (GEOs) (not visible) are continuously requesting to transmit data to two ground stations, one at INESC TEC in Porto, Portugal, and one at MIG in Munich, Germany. In addition, data can also be transmitted to an Unmanned Aerial Vehicle (UAV) which is placed near the MIG ground station. The LEO transmits with 50 Watts of power and an antenna gain of 5 dBi; the GEOs transmit with 200 Watts of power and an antenna gain of 35 dBi. The ground stations signal readiness with a transmission power of 2 Watts and an antenna gain of 5 dBi. The channel bandwidth is chosen to be 20 kHz.
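The link parameters above can be turned into an approximate received-power and SNR estimate via a standard free-space link budget. The sketch below is illustrative only: the carrier frequency (1.6 GHz, typical of IRIDIUM's L-band) and the LEO slant range (780 km) are assumptions, as neither is stated above; the transmit power, antenna gains, and 20 kHz bandwidth are taken from the scenario.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(tx_power_w: float, tx_gain_dbi: float,
                       rx_gain_dbi: float, distance_m: float,
                       freq_hz: float) -> float:
    """Received power via the Friis link budget (no extra losses)."""
    tx_dbm = 10 * math.log10(tx_power_w * 1000)  # Watts -> dBm
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_m, freq_hz)

# Thermal noise floor for the 20 kHz channel: N = k * T * B (T = 290 K)
k_boltzmann = 1.380649e-23
noise_dbm = 10 * math.log10(k_boltzmann * 290 * 20e3 * 1000)

# LEO downlink: 50 W, 5 dBi tx gain, 5 dBi rx gain (per the scenario);
# 780 km slant range and 1.6 GHz carrier are assumed values.
leo_rx = received_power_dbm(50, 5, 5, 780e3, 1.6e9)
print(f"noise floor:         {noise_dbm:.1f} dBm")
print(f"LEO received power:  {leo_rx:.1f} dBm")
print(f"SNR (no interferer): {leo_rx - noise_dbm:.1f} dB")
```

With these assumed geometry values the downlink closes with a comfortable margin; interferer nodes in the simulation then reduce the SINR below this noise-limited figure.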

As depicted in the above figure, for each simulation time step the current communication status can be inspected by means of a popup window. In the depicted example, the LEO IRIDIUM 2 is receiving a readiness signal from the MIG ground station on channel 2 with a good signal-to-interference-and-noise ratio (SINR). The chosen modulation is BPSK. No drop flag has been set, which means that the packet is received successfully. Data is transmitted via data channel 3 or 4, where the channel is either selected randomly (with the risk that the transmission fails due to interference) or based on knowledge of the optimal SINR achievable by choosing the least busy channel. In the next step, a cognitive algorithm can be applied in order to select the transmission channel based on learning from observed environmental parameters.
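The two non-cognitive selection modes described above can be sketched in a few lines. This is a minimal illustration, not the simulator's actual implementation; the channel numbers and interference levels are hypothetical.

```python
import random

def choose_channel_random(channels):
    """Random selection: the transmission may fail if the channel is busy."""
    return random.choice(channels)

def choose_channel_optimal(channels, interference_dbm):
    """Oracle selection: pick the channel with the lowest
    interference-plus-noise level, i.e. the one that maximizes
    the achievable SINR ("least busy" channel)."""
    return min(channels, key=lambda ch: interference_dbm[ch])

# Hypothetical interference levels on the two data channels
interference = {3: -95.0, 4: -110.0}
print(choose_channel_optimal([3, 4], interference))  # 4 (least busy)
```

The oracle mode requires knowledge of all channel conditions before transmitting, which is why it serves only as an upper performance bound in the evaluations below.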


In the current scenario, an Unmanned Aerial Vehicle (UAV) serves as an additional source of interference. As can be observed in the above figure, the UAV is stationary (no visible track) and is placed 100 m above ground in the vicinity of the MIG ground station in Munich.



Update on 25 November 2016


From single satellite to multi-satellite configurations
Figure 1 - Tracks of IRIDIUM satellites with Line of Sight (LoS) to the two Ground Stations during a simulation time of 1 hour.

In order to evaluate the performance of Cognitive Algorithms (CA) in the task of data transfer channel attribution, a reference satellite configuration has been defined. The reference configuration comprises two Geostationary (GEO) satellites (AFRISTAR and STAR ONE C2), the "full" IRIDIUM constellation of 90 Low Earth Orbit (LEO) satellites, and two Ground Stations (GS), one in Berlin, Germany, and one in Penzance, England. The reference configuration has been chosen to be as simple as possible (in order to facilitate data interpretation) while still containing all of the important features of a real-world use case.


Q-Learning performance evaluation for high and low learning rates
Figure 2 - Comparison of the channel assignments for three different modes as a function of time (assignment decisions) for a higher learning rate.
Figure 3 - Comparison of the channel assignments for three different modes as a function of time (assignment decisions) for a lower learning rate.

Q-Learning has been chosen as the default cognitive technique for data transfer channel attribution. The key element of the algorithm is a table, where each row represents one of the possible data channels and each column represents a transition to the channel of the corresponding row. To each of these channel-transition pairs a (quality) Q-value is assigned, which indicates whether a certain action (transitioning to another channel or maintaining the current state) is promising or not. After a data channel has been chosen, the corresponding Q-value is updated according to the level of noise and interference found in this channel (a high level of noise and interference leads to a lower Q-value). Figure 2 and Figure 3 show a comparison of the channel assignments (data channels 3 and 4) for three different modes as a function of time (assignment decisions). The orange line represents a channel attribution method which is "optimal" in the sense that it has access to the level of noise and interference in all channels before a choice is made. This knowledge is naturally not accessible in a real-world scenario. On the other hand, random channel attribution has been chosen as a method which is expected to perform worse than any Cognitive Algorithm, and is shown in dark blue. For a high learning rate (Figure 2), the channel attribution of the Cognitive Algorithm is shown in light blue and can be seen to perform very similarly to the optimal mode. With a lower learning rate, the adaptation of the algorithm is correspondingly slower, as can be seen in Figure 3.
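The Q-table update described above can be sketched as follows. This is a generic Q-learning sketch under stated assumptions, not the SCREEN implementation: the learning rate, discount factor, and epsilon-greedy exploration values are hypothetical, and the reward is simply taken to be the observed SINR in dB (so that high noise and interference lowers the Q-value, as described above).

```python
import random

class ChannelQLearner:
    """Q-table over (current channel, next channel) transition pairs."""

    def __init__(self, channels, learning_rate=0.5, discount=0.9, epsilon=0.1):
        self.channels = channels
        self.alpha = learning_rate   # high alpha -> fast adaptation (Figure 2)
        self.gamma = discount
        self.epsilon = epsilon
        # One Q-value per channel-transition pair, initially neutral
        self.q = {(s, a): 0.0 for s in channels for a in channels}

    def select(self, current):
        """Epsilon-greedy choice: mostly the best-known transition,
        occasionally a random one to keep exploring."""
        if random.random() < self.epsilon:
            return random.choice(self.channels)
        return max(self.channels, key=lambda a: self.q[(current, a)])

    def update(self, current, chosen, sinr_db):
        """Standard Q-learning update; a noisy/interfered channel yields
        a low SINR and hence drives its Q-value down."""
        reward = sinr_db
        best_next = max(self.q[(chosen, a)] for a in self.channels)
        old = self.q[(current, chosen)]
        self.q[(current, chosen)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Toy run: channel 4 is consistently clean, channel 3 is interfered
learner = ChannelQLearner([3, 4], epsilon=0.0)
for _ in range(20):
    learner.update(3, 4, 10.0)   # good SINR on channel 4
    learner.update(3, 3, -5.0)   # poor SINR on channel 3
print(learner.select(3))  # 4
```

With a lower `learning_rate`, each update moves the Q-values by a smaller step, which reproduces the slower adaptation visible in Figure 3.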


Accumulated penalty over time
Figure 4 - Accumulated penalty over time (assignment decisions) for a higher learning rate.

Figure 5 - Accumulated penalty over time (assignment decisions) for a lower learning rate.

As a first step to quantify the performance of the Cognitive Algorithm (CA), the channel assignments of the random (dark blue line) and Q-Learning (light blue line) approaches are directly compared with the "optimal" channel assignment (orange line). A penalty of +1 is incurred whenever a different channel is chosen than in the optimal case. Figures 4 and 5 show the accumulated penalty over time (assignment decisions) for a higher (Figure 4) and a lower (Figure 5) learning rate. The dashed line indicates the worst possible channel attribution with respect to the optimal choice. One can clearly see that the Cognitive Algorithm outperforms random channel attribution when channel transitions occur at a rate that gives the CA reasonable time to adapt.
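The penalty metric from Figures 4 and 5 is straightforward to compute from the two decision sequences. The sketch below illustrates the metric itself; the channel sequences are hypothetical.

```python
def accumulated_penalty(choices, optimal_choices):
    """+1 penalty whenever the chosen channel differs from the optimal
    one; returns the running total per assignment decision."""
    total, curve = 0, []
    for chosen, optimal in zip(choices, optimal_choices):
        total += int(chosen != optimal)
        curve.append(total)
    return curve

# Hypothetical decision sequences over four time steps
print(accumulated_penalty([3, 4, 4, 3], [3, 3, 4, 4]))  # [0, 1, 1, 2]
```

A perfectly adapted algorithm yields a flat curve at zero, while the dashed worst-case line in the figures grows by one per decision.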



Simulator code released under GPL license

The SCREEN simulation framework code has just been released under the GNU General Public License (GPL) on GitHub. The code can be found at the link below.

https://github.com/Munich-Innovation-Labs/screen-visualization