– Could we take the same decision with much less data?
– Would we take a better decision if we had more information?
Where is the information?
Note:
Most people agree that the noise is not relevant for this plot. However, its importance depends on the intended use (what we need this data for). If you are a sensor manufacturer, the noise readings might be very relevant if your goal is to reduce the noise for the next generation of sensors. On the other hand, if you are a spacecraft operations engineer, having this noise or not would probably not change any of the decisions you make.
Data looks continuous …
… but in reality there are only samples that we connect
Note:
We are so used to connecting samples with lines that we often don't realize what the data really looks like. Fractal Resampling removes the samples that lie on the line connecting two other samples (with a maximum-error guarantee).
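That removal boils down to a simple per-sample test; a minimal sketch (illustrative names, assuming arrays of timestamps t and values y):

```python
def within_error(t, y, i, j, k, max_error):
    """Would sample j be recovered, within max_error, by linearly
    interpolating between samples i and k? Illustrative sketch only."""
    frac = (t[j] - t[i]) / (t[k] - t[i])
    interpolated = y[i] + frac * (y[k] - y[i])
    return abs(y[j] - interpolated) <= max_error
```

The full procedure, sketched later after the mid-point displacement slides, applies this test to every point between two kept samples.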
Real Example – original
Max error = 0%, 10003 sampling points
Note:
The next slide shows what 'lossy compression' can be achieved by removing the data that does not bring much information. For this we will use a 1% error (0.42 degrees Celsius). Can you guess how many points the resampled version will have?
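As a quick sanity check, and assuming the 1% is taken relative to the full value range of the signal (the slides only give the resulting 0.42 degrees), the threshold works out as:

```python
# Assumption (not stated explicitly here): the percentage is relative to the
# signal's full value range, which would then be about 42 degC.
signal_range_degC = 42.0
max_error_degC = 0.01 * signal_range_degC   # -> 0.42 degC
```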
Real Example – Resampled 1% error
Max error = 1%, 356 sampling points
Real Example – Resampled 2% error
Max error = 2%, 67 sampling points
Inspiration
1. Generation of random fractal terrain
a. Game programming technique
Note:
Our inspiration for the Fractal Resampling technique came from the way random fractal terrain is generated in video games, typically for flight simulators. Our approach is simpler because we work in 2D and we don't need to apply any textures (rock, vegetation, etc.).
Inspiration: 2D Fractal terrain
2D Mid-point displacement
1. For every segment
a. Locate mid point
b. Displace it randomly
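A compact sketch of that recipe in Python (the function name, the roughness parameter and the recursion depth are illustrative choices, not the slides' actual generator):

```python
import random

def midpoint_displacement(left, right, depth, roughness=0.5):
    """Recursively build a fractal 'mountain silhouette' between two (x, y)
    end points. The random displacement is proportional to the segment
    length, so it shrinks at each level of the recursion."""
    if depth == 0:
        return [left, right]
    mid_x = (left[0] + right[0]) / 2
    mid_y = (left[1] + right[1]) / 2
    amplitude = (right[0] - left[0]) * roughness
    mid = (mid_x, mid_y + random.uniform(-amplitude, amplitude))
    # Recurse on both halves and join them, dropping the duplicated mid point
    left_half = midpoint_displacement(left, mid, depth - 1, roughness)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness)
    return left_half[:-1] + right_half

terrain = midpoint_displacement((0.0, 0.0), (100.0, 0.0), depth=6)
print(len(terrain))  # 2**6 + 1 = 65 points
```

Each level halves every segment and displaces the new mid point by a smaller random amount, which is what gives the silhouette its jagged, time-series-like look.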
Note:
The silhouette of the randomly generated fractal mountain looks like a time series. What about reversing the process? Let's start with a time series and then find the samples that are really needed.
How does it work? Mid-point displacement
[Figure: six numbered panels (1–6) showing successive mid-point displacement steps]
Note:
0. We start with a given time series.
1. The first and last samples are connected, and we check whether any remaining point would exceed the guaranteed maximum error if it were simply linearly interpolated.
2. Since at least one point fails the max-error criterion, we create a mid-point displacement. The same process is then applied to the left and right segments.
3–6. Repeat the process until no point's linear interpolation yields an error above the guaranteed maximum.
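A minimal sketch of that procedure in Python (the names, the split at the index mid point, and the assumption of strictly increasing timestamps are illustrative; the on-board implementation may differ):

```python
def fractal_resample(t, y, max_error):
    """Keep only the samples needed so that linearly interpolating between
    the kept samples never deviates from the original data by more than
    max_error anywhere in the time series."""
    keep = set()

    def recurse(lo, hi):
        keep.add(lo)
        keep.add(hi)
        if hi - lo < 2:
            return
        # Worst-case error of the intermediate points against the straight
        # line that connects samples lo and hi
        worst = 0.0
        for i in range(lo + 1, hi):
            frac = (t[i] - t[lo]) / (t[hi] - t[lo])
            interpolated = y[lo] + frac * (y[hi] - y[lo])
            worst = max(worst, abs(y[i] - interpolated))
        # If any point fails the criterion, keep the mid point (the reverse
        # of the mid-point displacement) and repeat on both halves
        if worst > max_error:
            mid = (lo + hi) // 2
            recurse(lo, mid)
            recurse(mid, hi)

    recurse(0, len(t) - 1)
    kept = sorted(keep)
    return [t[i] for i in kept], [y[i] for i in kept]
```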
Would we take a better decision if we had more information?
Better observability (higher-fidelity data):
1. Sample data on-board at a higher-than-normal sampling rate
2. Apply optimal resampling on-board
3. Transmit resampled data
[inexact + irregular] is better than [exact + regular]
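A usage sketch of that idea, reusing the fractal_resample sketch above on an invented, oversampled test signal with a short-lived spike:

```python
import math

# Invented test signal: a slow oscillation plus a short-lived spike,
# sampled on-board at a higher-than-normal rate.
t = [i * 0.1 for i in range(601)]
y = [math.sin(x) + (2.0 if 29.9 < x < 30.3 else 0.0) for x in t]

# Option A: regular decimation (keep every other sample) -> [exact + regular]
t_half, y_half = t[::2], y[::2]

# Option B: fractal resampling with a 1% error bound -> [inexact + irregular]
value_range = max(y) - min(y)
t_fr, y_fr = fractal_resample(t, y, max_error=0.01 * value_range)

print(len(t_half), len(t_fr))
# Decimation halves the volume regardless of content and can blur the spike;
# the resampled version typically needs far fewer points and concentrates
# them where the signal actually changes.
```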
Note:
Since we are limited by bandwidth, we only have partial observability of what is happening on-board. With the fractal resampling technique we can better understand what is happening on-board (even short-lived events) with the same or even less bandwidth.
Would we take a better decision if we had more information?
[Figure: three plots of the same signal (value axis from -1.5 to 2.5)]
– Sampled by half: 31 samples [regular + exact]
– Original: 61 samples
– This invention (1% error): 14 samples [irregular + inexact]
Could we take the same decision with much less data?
– Fractal resampling offers almost the same information for less data
– Max error guaranteed anywhere in the time series
– Still allows us to take the same decisions
Rosetta experiment:
– Fractal resampling for all parameters (1% error)