I tried to get the Mathematica version to do something useful, but in typical "mathematician minimalist style" it squeezed everything into one enormous, terse, but useless blob. The code on GitHub works as a demo only. Even after trying a few different syntax variations for about twenty minutes, I couldn't figure out how to feed the "ILT" function something that would spit out what I expect.
In case the original authors ever come across this article: There is a standard for writing Mathematica modules! Please take a look at some other modules available online, and see how they split the code into small, reusable functions.
Even the name is too short. In the Mathematica naming convention it should be called "InverseLaplaceTransformCME[...]" or something like that. Ideally, use the same calling convention as the built-in function, documented here: https://reference.wolfram.com/language/ref/InverseLaplaceTra...
This would allow your function to be a drop-in replacement, allowing users to switch between the symbolic and approximate versions trivially.
The researchers already did the hard work! Programmers should see this as an opportunity to thank scientists and develop their own polished version. We shouldn't expect to receive everything ready to import directly into our projects.
There must be some name for the effect seen here so regularly, which might best be described as the contrast in magnitude between the thing being posted and the comments made about it.
In this case we have a fundamental contribution to mathematics that is succinctly captured in 252 words, producing a 209 word complaint essentially about whitespace.
I think the commenter went through the effort of trying to use this and faced obstacles while doing so. They have a much more valid reason to complain here than drive-by comments of the sort you're describing, which are superficial observations about how the code/package is presented.
So there are some droids like you describe; but these are not the ones you're looking for.
I can reformat whitespace, if I care to read through a piece of mis-aligned code, but that's not at all the criticism here. You missed my point entirely.
This is like putting wooden wagon wheels on an F1 race car, and then complaining that people should appreciate the sleek lines of the car and stop commenting about the wheels.
The code posted on GitHub is wasting the effort and the talent that went into this algorithm. It's optimising for brevity, something utterly useless, over utility, which is essential.
The race car in this instance would be the paper: it is well presented and, assuming it survives scrutiny, eternal. One particular manifestation of it for one contemporary system might better be thought of as the garnish on the free salad bowl placed next to the car.
> The race car in this instance would be the paper: it is well presented and, assuming it survives scrutiny, eternal.
The parent is pointing out how the code is the only approachable version of the paper for many people. Making it read like mathematics renders it useless - since it only speaks to the audience that can already understand the paper.
I fall into this category. If the code had reasonable variable names and comments I could probably figure out how it works. But since it reads like the wall of LaTeX on the linked page - I can't pierce it without learning a lot of mathematics. I think that says a lot about mathematical notation.
As mathematicians, we find solace in the void of the universe. We embrace it, we suppose the existence of the empty set, and we are forever saved from the apparent infinitude of assaults against the great potential of the human condition.
Often, this kind of paradise is interrupted. Naive sets give way to ones with hyphenated names, and then some mathematicians prefer to branch before such hyphenated names, and in turn prolong the paradise that indeed continues to exist.
Some mathematicians decide to focus on the branching point, call it Paradise in its own respect, and appreciate it for what it is. Not for what it is not.
At the turn of the 19th century to the 20th century, a new kind of mathematician emerged that branched into an emerging area. This emerging area specialises in a form of logic popularly referred to as coding or programming. Very quickly, it turned out that this is a more social area of mathematics (though pure mathematics is indefinitely married to the concept of the mathematical seminar).
However, it is implicitly social. The sociability is in the form of subtle comments, rants about Git conventions, and indeed about the use of whitespace.
This convention somehow seems to have a purpose. That you can write a story through code and comments, without resorting to the actual English language, but rather through implementing the English Universal Pseudocode Nonconvention.
And though there shall always be trouble in Paradise, the beauty of this strange new branch is that we can read it as prose—not unlike the great French mathematicians used to do—and have a paradise simply in the bliss of effort.
Of course, of the dark art of machine code, and indeed of the strange validity of mathematics in general, we shall not say much, but simply read jokes about Yoda and try not to argue with a program that beyond all expectation continues to give the correct mapping to the codomain for all individual function inputs from its domain that we can enumerate in a reasonable amount of time.
This is possibly the most fabulously pompous faulty comparison I've ever read. To think that somewhere, someone might place the work of the common developer alongside that of a mathematician is hilarious. Sure, it's possible to say development has a mathematical basis, much as you could say operating a speedboat or a kite has. Our trade has about as much in common with mathematics as a convent has with a brothel.
Would you be able to provide one or two examples of what you consider well written Mathematica modules? Or provide a reference on Mathematica programming that you liked?
Naming functions with acronyms instead of names is totally unrelated to familiarity with math notation. You don't even use acronyms in papers; you use Greek symbols and letters written in special fonts.
A little ELI5 for those who haven't had Laplace transforms at school, from someone who only had a Laplace 101 course, so for what it's worth: Laplace transforms allow you to convert differential equations into easier equations, and back: the differentials and integrals become multiplications and divisions. So you can take a differential equation, transform it into the Laplace domain, manipulate it, and convert it back. And that's cool because differential equations tend to appear everywhere, for instance to model springs, electrical circuits with caps and coils, the surface of a soap film on a metal ring, etc. A sibling is the z-transform, which is like the digital version. This one is used for instance to design digital audio filters. I'm sure some math wizards here can elaborate and correct me.
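To make the "differentials become multiplications" point concrete, here's a toy sketch using SymPy's built-in transforms (a made-up one-line ODE, not anything from the article):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Toy ODE: y'(t) + y(t) = 0 with y(0) = 1.
# Transforming term by term turns d/dt into multiplication by s:
#   s*Y(s) - y(0) + Y(s) = 0,  so  Y(s) = 1/(s + 1)  -- plain algebra.
Y = 1 / (s + 1)

# Inverting the transform recovers the solution y(t) = e^(-t) for t >= 0.
y = sp.inverse_laplace_transform(Y, s, t)
print(y)
```

The inverse step here is symbolic; the article is about doing that last step numerically when no closed form exists.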
I don't have any real understanding of the Laplace transform, but I understand the Fourier transform well enough that it makes sense to me. Back then, I saw a claim that the Laplace transform is a generalization of the Fourier transform in the sense that it transforms a function not only to a space of frequencies and phases of sine waves, but to a larger space of parameters of exponentials. Note that the parameter space of the sine waves is a subset of the (complex) parameter space of exponentials.
If you understand the Fourier transform well, then perhaps this viewpoint will help. The Fourier transform is 'just' a change of basis, with the basis being the sinusoidal functions. Why these? Well, because if we take a look at the discrete Fourier transform, the matrix that changes the basis is both unitary [which it has to be, as a change of basis] and Vandermonde. So we can think of it as both a 'change of basis' and an 'evaluation of a polynomial'. This is where most of the power of the Fourier transform comes from.
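That double nature is easy to check numerically (a small sketch; n = 8 chosen arbitrarily):

```python
import numpy as np

n = 8
omega = np.exp(-2j * np.pi / n)   # primitive n-th root of unity
# The (normalized) DFT matrix: entry (k, j) is omega^(k*j) / sqrt(n)
F = omega ** np.outer(np.arange(n), np.arange(n)) / np.sqrt(n)

# Unitary: F times its conjugate transpose is the identity (a change of basis)
print(np.allclose(F @ F.conj().T, np.eye(n)))

# Vandermonde: row k evaluates a polynomial at omega^k, so F applied to a
# coefficient vector gives the polynomial's values at the n roots of unity.
coeffs = np.random.rand(n)
vals = np.sqrt(n) * F @ coeffs
print(np.allclose(vals, np.polyval(coeffs[::-1], omega ** np.arange(n))))
```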
Similarly, the Laplace transform is also a change of basis. But the basis it chooses is a very special one --- the eigenvectors of the differential operator. Note that

d/dx(e^(ax)) = a e^(ax)

So e^(ax) is literally an eigenvector of `(d/dx)`. And as we all know, going to the eigenbasis of a given operator/linear transform/matrix makes it easier to manipulate. The Laplace transform is a change of basis that diagonalizes the differential operator. This makes it easy to solve differential equations.
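A quick numeric illustration of that eigenvector fact (a chosen arbitrarily):

```python
import numpy as np

a = 0.7
x = np.linspace(0.0, 1.0, 10001)
f = np.exp(a * x)

# Numerically differentiating e^(ax) gives back a * e^(ax):
# f is an "eigenvector" of d/dx with eigenvalue a.
df = np.gradient(f, x)
print(np.allclose(df[1:-1], a * f[1:-1]))   # interior points (2nd-order accurate)
```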
One can also think of transforms as the eigenfunctions of the continuous part of the spectrum of a differential operator. In differential equation (DE) theory, a well-posed DE has a structure (the DE itself), a domain, and enough independent boundary conditions.
The Fourier transform will show up for a harmonic oscillator on the whole real line with incoming and outgoing wave boundary conditions, while Laplace will show up when working on a semi-infinite interval with initial conditions and proper convergence at infinity.
These are the most common, but not the only transforms one can build. There are also Mellin and Hankel transforms, and by playing with the operator, the domain, and the boundary conditions, we can construct the adequate transform for each given problem.
Spectral theory of DE’s is such a beautiful topic.
> This idea of using exponentials in linear differential equations is almost as great as the invention of logarithms, in which multiplication is replaced by addition. Here differentiation is replaced by multiplication. . . . See how simple it is! Differential equations are immediately converted, by sight, into mere algebraic equations
Is the Laplace eigenbasis considered a basis in a relaxed sense that permits non-orthogonality or does the notion of orthogonality itself change to counteract the apparent redundancy?
Yes. The Fourier transform characterizes a signal on a circle (usually the unit circle in the complex domain) at different frequencies. It is properly defined for periodic signals.
The Laplace transform takes any exponential spiral in the complex plane, and reduces to the Fourier transform if you only care about the unit circle.
I appreciate that doesn't make things clearer unless you already have some understanding of integral transforms in the complex plane (in which case, you probably know this already). However, I have never come across a simple intuitive explanation of the Laplace transform, and actually no meaningful explanation that doesn't involve integrals.
With Fourier you can analyze oscillatory characteristics of function (frequency and phase). With Laplace you can also analyze amplification/attenuation.
The Fourier transform, well, transforms a time-based phenomenon such as an alternating current sine wave into a frequency spectrum where you can observe the frequency spectrum components of the signal. A pure nice sine becomes a spike (delta function) located at a specific frequency. Music, as we observe it through our ears and can view it on an oscilloscope becomes moving spikes (lots of them :) in the frequency spectrum where the "amplitude" at a given frequency relates to the "amount" of that frequency in the music.
The behaviour of a filter is much easier to describe in the frequency spectral domain than it would be in the time domain.
Now to the direct current (DC) view. This cannot be handled by the Fourier transform -- at least the DC-part of the signal cannot be transformed to the frequency domain. As shown in the article, there were "steps", "ramps" and such. A typical scenario would be to describe what happens in your amplifier during startup, to describe how electrical circuits are behaving during startup before reaching the "running" state.
The Laplace transform will handle these types of scenarios, and can thus be used to study (or describe) systems during other types of transitions than the "steady state" when you are up and running.
Regarding filters, the Fourier transform describes things going on at the unit circle, while the Laplace transform can be used to study both the interior and exterior of the plane. In this sense, creating filters relates to locating "poles" and "zeros" in the plane (amplification and attenuation), which can be observed on the unit circle as the behaviour on periodic signals.
I recall from some undergraduate classes I took (in mechanical engineering) that you can "convert" between a Laplace transform and a Fourier transform by saying that s = i * omega. This has some appeal from a purely algebraic standpoint but doesn't seem rigorous to me. For one, the limits of the integral aren't the same. How valid is this? I always assumed it was an approximation.
Is the Laplace transform in some sense similar to a one-sided/semi-infinite Fourier transform, provided that change of variables is made?
Years ago in a complex analysis class I worked out the contour integration for a few Fourier transforms as I recall, but I've had no similar training for the Laplace transform and have forgotten many details.
Yeah, they're only equal when your f(t) = 0 for all t < 0. Otherwise they can be quite different, because the integral limits differ. In an undergrad engineering context you're often evaluating the response of a system to some input and it's common to have "at t=0 the switch is closed/mass is released/etc." where the function is assumed to have been 0 previously.
For a little more intuition about why you can do that replacement: Laplace is a representation of a system as a sum of sinusoids * exponentials, which are your two axes in the Laplace plane: frequency on the iw axis and exponential growth/decay on the a axis. If you think of that replacement as s = iw + a | a = 0, you'll see the exponential terms go away and you're left with just the sinusoidal parts, integrated over time, which is your Fourier transform subject to the condition above. It's just the Laplace transform along the y axis, or, the frequency response at steady state when not growing/decaying exponentially.
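A quick numeric check of that equality for a causal signal (hypothetical f(t) = e^(-t) for t >= 0, whose Laplace transform is 1/(s+1)):

```python
import numpy as np

# f(t) = e^(-t) for t >= 0, and 0 before the switch closes.
# Substituting s = i*w into F(s) = 1/(s + 1) should match the Fourier
# integral, because the t < 0 part contributes nothing.
w = 2.0
t = np.linspace(0.0, 50.0, 200001)
g = np.exp(-t) * np.exp(-1j * w * t)

dt = t[1] - t[0]
fourier = dt * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule for the integral
laplace_at_iw = 1.0 / (1j * w + 1.0)

print(fourier, laplace_at_iw)   # agree to about 1e-7
```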
I believe you're confusing discrete and continuous time. In continuous time, Fourier is Laplace evaluated along s = jw (the vertical axis), not along e^jw (the unit circle).
Please check this video presentation for a simple overview of the Laplace transform [1]. Basically the Fourier transform (FT) is a special case (subset) of the Laplace transform (LT), where the signal waveforms revolve on the unit circle (purely imaginary exponents). Conversely, the LT is a generalization (superset) of the FT, where the signal waveforms can also revolve off the unit circle (complex exponents).
The discrete FT, or DFT, as the name clearly implies, is the discrete version of the FT, and similarly the discrete Laplace transform (DLT) is the discrete version of the LT. The main difference is that the DFT covers a finite sum but the DLT covers an infinite sum.
The faster version of the DFT (without compromising resolution accuracy) is called the FFT, and it is probably the most useful and important algorithm in the 21st century! The inverse FFT is called the IFFT and was discovered around the same time as the FFT. The faster version of the DLT is, interestingly, called the Chirp-Z Transform (CZT), and somehow its inverse (the ICZT) was discovered at a much later date, as has been reported recently [2] and also featured on HN [3]. This much later date of discovery is mainly due to the complexity of complex power exponents (pardon the pun, but I cannot resist).
Fun fact: the CZT was discovered by Lawrence Rabiner, who was working at AT&T's speech processing lab (SPL) [4]. The lab was so well funded that Kernighan and Ritchie, who belonged to another lab, had to scrape by with the SPL's older computer (the infamous PDP-7), on which Unix was originally developed when the Multics project got canceled.
Sounds right to my (very very rusty) recollection. Laplace transforms are a magic trick that let you easily solve some kinds of differential equations.
>> Laplace transforms are a magic trick that let you easily solve some kinds of differential equations.
To mathematicians I don't think they're so much magic. When I took a differential equations class it was frustrating that they went too fast for me to fully digest what was "really" going on. It didn't feel out of reach, but something I needed to look at a couple different ways but didn't have time (or the internet) to do so. Think I'm gonna check out 3blue1brown after this - he can probably close that gap for me.
> Think I'm gonna check out 3blue1brown after this - he can probably close that gap for me.
You might like this lecture from MIT's OCW: [1]. It's my favorite source for motivating the Laplace transform. It's a bit difficult to make this concept "simple", and this resource assumes that you already have some familiarity with the following concepts: infinite series, power series, radius of convergence, and (indefinite) integration.
The tl;dw is that the Laplace transform is a generalization of a power series.
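A tiny numeric sketch of that correspondence, taking the simplest case a(t) = 1 and writing x = e^(-s):

```python
import numpy as np

# A power series sums a(n) * x^n over integer n. Replacing the sum by an
# integral and writing x = e^(-s) gives  ∫ a(t) e^(-st) dt  -- the
# Laplace transform. For a(t) = 1:
#   discrete:   sum of x^n over n >= 0     = 1/(1 - x)
#   continuous: ∫ e^(-st) dt over [0, inf) = 1/s
s = 0.5
x = np.exp(-s)

discrete = sum(x ** n for n in range(1000))   # geometric series
continuous = 1.0 / s

print(discrete, 1.0 / (1.0 - x))   # the series matches its closed form
print(continuous)                  # the continuous analogue
```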
Something also worth mentioning is that it isn't just useful when dealing with differentiation / anti-differentiation but also when dealing with convolution.
I've had some problems understanding the Laplace transform. Maybe somebody here can point me towards some material.
I have an interest in understanding how IIR filters are designed, and I always get stuck at this part in DSP books. The Laplace transform is used, but as well as finding the mathematics difficult, I don't really understand why it is being used at all. I think it is trying to replicate the effect of an analog circuit?
I like practical examples for learning about math-heavy stuff and I came to greater understanding while looking into imaging, specifically how DCT (discrete cosine transformation) works.
You learn how an image is dissected into two matrices (or one complex matrix) containing amplitudes and phases of respective frequencies. A good start for me was playing around with OpenCV and reading about JPEG (uses DCT).
Why transform an image in the first place? Because you can just set the highest frequencies to zero without influencing the image in real space too much. This effect is leveraged by classical JPEG compression, you just delete data not that important for the image. Being able to analyze, filter, change frequencies in a signal has a lot of other applications.
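A small sketch of that idea in one dimension, using SciPy's DCT with a random walk standing in for an image row:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
row = np.cumsum(rng.standard_normal(64))   # smooth-ish stand-in for an image row

coeffs = dct(row, norm='ortho')
coeffs[16:] = 0.0                          # delete the top 75% of frequencies
approx = idct(coeffs, norm='ortho')

# Most of the energy sits in the low frequencies, so the reconstruction
# from only 16 of 64 coefficients stays close to the original.
err = np.linalg.norm(approx - row) / np.linalg.norm(row)
print(err)
```

Real JPEG works on 8x8 blocks and quantizes rather than zeroing, but the principle is the same.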
There is a ton of literature about the DCT because of its widespread application. A few Google searches lead to good learning material. Fourier and, in general, Laplace transformations are a little different, but far easier to understand after seeing an example of their application, in my opinion.
This also touches the topic of the article. The problem is that transforming between real space and spectral space results in rounding errors. The article describes a new approach to minimize these.
You can describe a circuit by its time-domain behavior. Or you can describe the circuit by its frequency-domain behavior. Both are valid and congruent.
The thing is a lot of questions are easy to answer in the frequency domain.
For instance, you want to know if a circuit with feedback will oscillate. Hard to answer using time-domain equations. But in the frequency domain there is a simple constraint: if, for all frequencies where the gain is greater than one, the phase shift is less than 180 degrees, the circuit won't oscillate. This is obviously rather useful.
Also, a problem with a lot of books: the authors get so caught up in describing how something is done that they never explain why it is done. I've found often the answer is simple yet opaque and frustratingly never talked about.
This. I remember having adequate cursory knowledge of Fourier Transform to the point of understanding the value of FFT algorithms, but the Laplace Transform was explained like hell so I failed my robotics classes.
If you have an electronic circuit, you can model each element with a differential equation. E.g. voltage across a capacitor is modelled as the integral of current, voltage across an inductor is dI/dt.
This is a useful fact for a simple circuit in a classroom, but the differential equations for any circuit with more than a few components soon become insanely complex.
With the Laplace transform you (more or less) replace an integral with 1/s and a differential with s, plus some constants derived from the component values.
Then you can simplify for s, and use the Inverse Laplace Transform to convert the final expression in s into an expression in t.
You have now solved an insanely complex differential equation with some basic algebra, and your final expression in t - with component constants, and some exponentials that appear after the inverse transform - accurately models how the circuit responds over time.
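A minimal sketch of that workflow with SymPy, for a hypothetical RC low-pass driven by a unit step (component values made up for illustration):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
tau = sp.Rational(1, 1000)   # hypothetical R*C = 1 kOhm x 1 uF = 1 ms

# Unit step input: V_in(s) = 1/s. The capacitor's impedance divider gives
# V_out(s) = V_in(s) / (1 + s*R*C)  -- plain algebra instead of an ODE.
Vout = 1 / (s * (1 + s * tau))

# Inverse transform back to the time domain:
vout = sp.inverse_laplace_transform(Vout, s, t)
print(sp.simplify(vout))   # 1 - e^(-1000 t) for t >= 0: the RC charging curve
```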
There's also a related fairly simple trick for converting the s-domain representation into a frequency/phase plot which tells you how the circuit operates in the frequency domain.
And another related fairly simple trick for converting the continuous s-domain into the z-domain for DSP calculations over a sampled time series.
Because the same theory also applies in other domains - spring/mass systems, and so on - you can use the same technique there too.
Yes, this is very good. As is the point that restating the problem in a different domain is a very common way to make a problem tractable.
Examples:
Converting numbers to logs allows you to multiply and divide by mere addition and subtraction. If you wonder why RF engineers represent power in dB, this is why.
Mapping an equation in terms of forces integrated over a path to one using vectors and energy.
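The log trick in the RF engineer's dB units, as a worked one-liner:

```python
import math

# Gains multiply; their dB values add. A 100x gain (20 dB) followed by a
# half-power stage (about -3 dB) is 50x overall.
g1, g2 = 100.0, 0.5
db_total = 10 * math.log10(g1) + 10 * math.log10(g2)
print(db_total)                   # about 16.99 dB
print(10 ** (db_total / 10))      # back to a plain ratio: 50.0
```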
Yes, one way of designing an IIR filter is to design the continuous-time version and convert it to discrete. There are other (usually better) ways, but if you've already got a good understanding of continuous-time filter behaviour, it's a usable on-ramp.
Control systems, signal filters (noise attenuation), modeling epidemics, modeling queues, modeling reliability of repairable systems, modeling recurrent events(such as failures), renewal processes, modeling inventory plans, probability in general (because of the connection with moment generating function) ...
I got taught them in a course on linear systems which was a pre-requisite course to control theory.
Lots of electrical circuits, mechanical systems and electro-mechanical systems can be modelled using laplace transforms if they are linear systems.
I did an electrical and electronic engineering degree and we got to skip the tedious differential equation solving lectures that the mechanical, civil and chemical engineers had to attend because of Monsieur Laplace.
From my time at the uni, I wish we'd had a proper course that covered Laplace (and the important special cases, e.g. Fourier and z) transforms properly. Instead, the coverage was scattered across general math courses and the courses that needed to apply them.
rollulus gave a good summary of Laplace transforms and what they do. For some more context, they appear regularly in applied probability (e.g. finance, insurance, physical models including dams). A typical problem is dealing with sums of non-negative random variables. Let's say you want the distribution of the sum of n independent copies of a non-negative random variable with distribution function F. The hard way is the n-fold convolution, or essentially evaluating an n-dimensional integral. The easy way is using the Laplace transform of F and simply raising it to the power of n.
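A small sketch of that shortcut using exponential random variables (whose Laplace transform has the closed form lam/(lam + s)), with a Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, s = 2.0, 5, 0.7

# Laplace transform of Exp(lam) is lam/(lam + s); for a sum of n independent
# copies you just raise it to the n-th power -- no n-fold convolution needed.
analytic = (lam / (lam + s)) ** n

# Monte Carlo check: E[exp(-s * S)] for S = X1 + ... + Xn
samples = rng.exponential(1 / lam, size=(200000, n)).sum(axis=1)
empirical = np.exp(-s * samples).mean()
print(empirical, analytic)   # agree to Monte Carlo accuracy
```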
The result isn't always invertible analytically, but you can almost always invert it numerically and this is why techniques like the one outlined in the paper are so important.
This is a fantastic post and I thoroughly recommend reading it and the 2019 paper that summarises all their work for several reasons:
1. Very clear exposition of previous work and their own.
2. Clear evaluation metrics.
3. They've even made it easy for you to replicate their work and results.
I've never understood the use of the Laplace transform. Perhaps that's due to my mathematical exposure (theoretical qualitative analysis of PDEs). Since the Laplace transform lacks the duality of the Fourier transform, it doesn't seem to have a place in research mathematics. But I can probably think of a dozen fundamental uses of the Fourier transform, from Bourgain spaces to evaluating oscillatory integrals. And if you're working on some manifold with curvature then you generally need to be familiar with the eigenfunctions of the Laplacian on that manifold... not the basis of the Laplace transform.
I also know a bit of signal processing/numerical analysis, and I'm not familiar with any practical uses of the Laplace transform there. I don't believe it's used in the numerical solution of PDEs or ODEs, whereas spectral methods are a huge area of study and (until recently, I think) were used in the GFS weather model. And most time series analysis tools either apply the Fourier transform or bail out of this approach and use statistical tools.
My version of Greenspun's 10th rule goes: any sufficiently complex program includes an FFT.
Can anyone help me out here? Is there a problem/theorem the Laplace transform solves/proves which the Fourier transform doesn't?
For a good explanation on Laplace Transform please check the video presentation link that I've provided in my other comments.
As for the Laplace transform, it is mainly used in control system applications where the input/output includes transient/damping/forcing signal waveforms (on and off the unit circle), not only clean steady-state signal waveforms (on the unit circle). This paper provides a good overview of sample usages of the Laplace transform in electrical and electronics engineering [1].
If what you meant by the duality of the Fourier transform is FFT/IFFT, the Laplace transform has the equivalent in the form of the Chirp-Z Transform (CZT) and the recently discovered inverse CZT (ICZT); the original HN discussion link for the discovery is also provided in my other comments. For potential useful applications of the CZT/ICZT please check the other/older HN topic comments in [2].
Perhaps we should just wait and watch for the torrent of patent filings on this CZT/ICZT topic if the claim of ICZT is really true and feasible.
The Fourier transform is a line cut out of the Laplace transform, and the Laplace transform is the analytic continuation of the Fourier transform. So you should not be surprised to see the Fourier transform show up in all applications, because there is an FFT but no FLT.
Remember in linear algebra how you spent most of the time learning about eigenvalues and eigenvectors, and particularly how to "diagonalize" a matrix `A` into `A=PDP^-1`? Doing this makes `A` easier to work with, so problems that include `A` are often easier to solve if you replace `A` with `PDP^-1`.
The Laplace transform is the same thing, but instead of matrices, it works on derivatives. Equations involving `d/dt` are often made easier to work with by instead using `s`.
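The analogy in miniature, with an actual matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, P = np.linalg.eigh(A)   # A = P @ diag(evals) @ P.T (A is symmetric)

# In the eigenbasis, applying A ten times is just raising scalars to the
# 10th power -- the same way the Laplace transform turns repeated d/dt
# into repeated multiplication by s.
A10 = P @ np.diag(evals ** 10) @ P.T
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))
```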
This is true of the Fourier transform, too. The duality properties of the FT make it a nice theoretical tool. And the existence of the FFT make it a nice practical tool, too.
Interesting, but I'm not seeing what this has to do with Fourier or Laplace. Mathematically, the statement is: if x is L^{\infty} and h is L^1, then y=x*h is L^{\infty}. The proof of this is very routine... just write down the definition of the convolution and stare at it.
Now it's true that the Fourier transform is not well-behaved on L^{\infty}. Any book on harmonic analysis will discuss this point in great lengths. But the context in which these questions are discussed (singular integral operators on spaces like BMO, which contains L^{\infty}) doesn't tend to include the Laplace transform in a useful manner.
Someday I'll have to properly learn harmonic analysis to sort out these questions for myself.
But seriously, do you have a favorite example where the Laplace transform is used to prove a theorem or used in practice to solve a problem?
I'm familiar with the undergraduate differential equations examples. But there are plenty of things taught at the undergraduate level which are tractable and helpful to build intuition but either a) aren't important from a research perspective or b) aren't used in practice. The Fourier transform has both.
All the time in AC circuits. Especially for anything RF-related. It's vastly, vastly easier to work in the (complex) frequency domain. Antenna and filter design are pretty much all done in the s-domain.
This is an interesting promotion of an applied math result. From their promotional material it looks promising, though the unusual promotional approach makes me worry.
The actual paper is at https://www.sciencedirect.com/science/article/pii/S016653161.... This is a pretty obscure journal. The paper is pretty "soft" -- lots of numerical testing of their approach vs. other well-known approaches and not very much theoretical analysis of convergence rates or such.
The main claim seems to be that their approach has better numerical properties for discontinuous functions and that it can be effectively implemented to high order using double precision arithmetic.
Why do you say they are not associated? They seem to be part of a research group at the Technical University of Budapest, looking at the paper's affiliations.
Trying to put together how a new numerical method works scouring for papers with different nomenclatures, different sets of authors, different implementations etc. is often a huge pain. I wish these "landing pages" became a standard, or that a standard repository for them became available. Something like, this is our technique, these are the relevant papers, and here is some demo code.
I'd rather have an applied paper have tests, comparisons and source code than lots of theory and being hard to reproduce because "implementation details" don't appear in the paper.
Thanks to the authors for putting the code out there for anyone to reproduce, and for not falling into the unreproducible "science" that is plaguing us at the moment [1].
Inverting the Laplace transform is a central problem in computational physics, since it connects imaginary-time results (easier to obtain numerically) to real-time response.
Over the years a number of approaches have been developed for the inverse Laplace transform, such as MaxEnt, GIFT and many others.
I would love to see how this new approach fares against those.
I feel like this is what Tim Berners-Lee imagined the World Wide Web to be: sharing knowledge and research with interactive media and hypertext, instead of printed papers. It found new applications outside academia, but this site is probably close to the original idea.
Thank you for the exposure and feedback! We really appreciate it.
About the code. We have added comments and simple running examples to the code on github. Hopefully that helps make the code more accessible to everyone.
About the contribution. Classic numerical inverse Laplace transformation methods work in some cases but fail in others, while the CME method always gives a good approximation at low computational cost. We recommend it for general use when you just want to invert a function numerically without spending effort to figure out what methods might be applicable.
This story is six days old. Don't expect a lot of replies to your comment to come, but know that your comment & more importantly the improvements (and ofc the result!) get appreciated.
Not really my, well, domain (sorry), so my only contribution is that there's a spelling error in the dropdown: it refers to the Heaviside step function as the 'Heavyside' function.
Just spitballing here, about an application of the Laplace transform. We have a product that allows users to use machine learning in a semi-automated way, without deeply understanding hyperparameter optimization, model testing, selection, evaluation and such.
There was some talk about supporting the prediction of time-series data. I have absolutely no knowledge of how time-series data should be pre-processed and what kind of algorithms are common or applicable in general. (I'm not in charge of the R&D of the data-science-y features.) However, it seems like the Laplace transform as a pre-processing step ticks a lot of the checkboxes. As a superset of Fourier, it supports periodic changes in time series, and being about exponentials, it also allows for growth (or decay) over time, making it possible to transform a time series into data that is more applicable to classical ML algorithms.
Is the Laplace transform actually used for such use cases?
I don't know, but the Fourier transform, and specifically the more specialized DCT, certainly is.
Part of the reason is that the algorithms for going from discrete data points to a waveform are well known and fast.
The DCT is the foundation of most lossy encoding formats. Using it for time-series data makes a lot of sense, especially if you are optimizing for storage space.
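To make that concrete, here is a minimal, naive Python sketch (the signal and the truncation cutoff are made up for illustration; real codecs use fast, normalized DCT implementations). A smooth time series round-trips exactly through the DCT-II and its inverse, and keeping only the low-frequency coefficients still reconstructs it closely, which is the essence of lossy compression:

```python
import math

def dct2(x):
    """Naive (unnormalized) DCT-II: expresses a signal as a sum of cosines."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct2(X):
    """Inverse of the unnormalized DCT-II above (a scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                            for k in range(1, N))) * 2 / N for n in range(N)]

# A smooth time series: one sine period plus a slight upward trend.
series = [math.sin(2 * math.pi * n / 32) + 0.001 * n for n in range(32)]
coeffs = dct2(series)

# Keep only 8 of 32 coefficients (4x "compression"), zero out the rest.
truncated = coeffs[:8] + [0.0] * 24
recon = idct2(truncated)
err = max(abs(a - b) for a, b in zip(series, recon))
print(err)  # small: most of the energy sits in the low frequencies
```

The same idea carries over to storing time series: keep the handful of large low-frequency coefficients, discard the rest.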
Here's a little ELI5 about the Laplace and inverse Laplace transform, why the inverse transform is fiendishly difficult, and therefore why this result is extraordinarily important.
Imagine you win the Megabucks lottery. The win is one hundred million dollars. You go to claim your money, but you are told you can choose between the full amount given in monthly payments over 20 years, or a lump sum. But the lump sum is not the full $100MM, it is the present value of the monthly payments discounted at a rate of 5%. To discount an amount received 10 years from now at the 5% rate, you simply divide by 1.05 ^ 10, which is very close to exp(0.05 x 10). If you actually calculate this present value using the exponential function, you say that you use "continuously compounded rates".
So, for any stream of future cashflows one can calculate the present value by multiplying the cashflows by appropriate discount factors (of the type exp(-r t)) and adding them up. For different discounting rates r you obtain different present values. This present value, as a function of r, is the Laplace transform of the cashflow stream as a function of t.
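As an illustrative Python sketch (simplified to annual rather than monthly payments, with continuously compounded discounting), the present value as a function of r is just a discrete Laplace transform of the cashflow stream:

```python
import math

def laplace_pv(cashflows, r):
    """Present value of (time_in_years, amount) pairs at continuously
    compounded rate r: a discrete Laplace transform evaluated at r."""
    return sum(amount * math.exp(-r * t) for t, amount in cashflows)

# The $100MM lottery win, simplified to $5MM per year for 20 years.
annuity = [(t, 5e6) for t in range(1, 21)]

print(laplace_pv(annuity, 0.0))   # undiscounted: the full $100MM
print(laplace_pv(annuity, 0.05))  # the (smaller) lump-sum value at 5%
```

Sweeping r through many values traces out the whole transform of this cashflow stream.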
The inverse Laplace transform is solving the riddle: if I tell you the present value (PV) of some cashflows for any (positive) discount rate you want, can you calculate the cashflows?
Why is this a difficult problem? Because it is "ill-conditioned". Imagine the following two cashflow streams: in the first you get $1MM every year for the next 10 years and another $1MM one hundred years from now. In the second you also get $1MM annually for the first ten years, but the last $1MM arrives 101 years from now. At a zero discount rate the value of both streams is $11MM. At 5% they are both around $8MM and differ by only a few hundred dollars, well under 0.01%. For any discount rate the PVs will be very, very close.
In some cases "in real life" this closeness could be below machine-precision level. If someone gives you two sets of inputs whose Laplace transforms differ by less than machine precision for all values of the discount rate, then there is no hope of telling them apart from their transforms alone, at least not without multiple-precision libraries.
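A quick numerical sketch of that ill-conditioning (Python, continuously compounded discounting, so the exact figures differ slightly from the annual-compounding ones above):

```python
import math

def present_value(cashflows, r):
    """PV of (time_in_years, amount) pairs at continuously compounded rate r."""
    return sum(amount * math.exp(-r * t) for t, amount in cashflows)

# $1MM per year for 10 years, plus a final $1MM at year 100 (A) or 101 (B).
annual = [(t, 1e6) for t in range(1, 11)]
stream_a = annual + [(100, 1e6)]
stream_b = annual + [(101, 1e6)]

for r in (0.0, 0.05, 0.10):
    pv_a = present_value(stream_a, r)
    pv_b = present_value(stream_b, r)
    # The two transforms nearly coincide at every rate r.
    print(r, pv_a, pv_b, abs(pv_a - pv_b) / pv_a)
```

At r = 0 the two PVs are identical; at every positive rate they agree to several digits, even though the underlying cashflow streams are clearly different.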
That should give you an intuition why the inverse Laplace transform is nasty. All hope is not lost though. First of all, in a typical application the Laplace transform of a function is known in closed (analytical) form, so you can actually use multiple precision libraries if you so wish. I have seen cases where people were using precision of 2000 digits in Mathematica for this. It's slow as hell, but it gets the job done.
Separately, you are free to calculate the Laplace transform at any "discount rate", including complex values. If you are smart about how you choose these values, you can come up with good recipes for the inverse Laplace transform.
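As a hedged illustration of such a "recipe" (this is not the CME method from the article): the classical Gaver-Stehfest algorithm gets by with only real sample points s = k·ln2/t, combined with large alternating-sign weights (other recipes, such as Talbot's, use cleverly chosen complex points). A pure-Python sketch:

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace_stehfest(F, t, N=12):
    """Approximate f(t) by sampling its transform F(s) at real points
    s = k*ln2/t and summing with the Stehfest weights."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# F(s) = 1/(s+1) is the Laplace transform of f(t) = exp(-t).
approx = invert_laplace_stehfest(lambda s: 1.0 / (s + 1.0), 1.0)
print(approx)  # close to exp(-1)
```

The alternating, rapidly growing weights are exactly where the ill-conditioning bites: in double precision you cannot push N much past ~16 before cancellation destroys the answer, which is why the multi-thousand-digit Mathematica runs mentioned above exist.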
For decades now, the general wisdom has been that the various numerical inverse Laplace transform algorithms each have strengths and weaknesses, but that no single one is universally good.
Maybe this one will be, and if so it will be indeed revolutionary.
One of the big breakthroughs in Machine Learning/Neural Networks (NN) is using the derivative of the error to update the weights of the network (backpropagation). I wonder if CME could be used to avoid local minima/maxima in some way, to speed up the training process.
It seems to be advanced maths, but I wonder: since the designer already knows in advance the desired form of the function, in order to get a desirable property (in both cases being smoother / more continuous / more centered), why not draw the desired function graphically and let software automatically find the best approximation?
EDIT: well, it seems to be a general function approximator, so my point doesn't apply (but it still applies to the new activation functions in machine learning).
You may even want to contact Wolfram Research! They just implemented a new "Asymptotics" module that includes approximate inverse Laplace transforms as a feature. See: https://reference.wolfram.com/language/guide/Asymptotics.htm...
They might add your approach to the 12.2 release, which would mean that many thousands of people could automatically benefit from your hard work!