New Algorithms to Compute Hypersingular Integrals with Rapidly Oscillating Kernels
The paper is dedicated to the study of two new algorithms for approximating hypersingular integrals with rapidly oscillating kernels. The methods use an interpolatory procedure at the zeros of orthogonal polynomials. Error estimates are given, as well as estimates of the amplification factors. Numerical examples show the agreement between the theoretical and the numerical results.
Introduction
In [1] the authors propose an approximate method for evaluating the Cauchy singular integral with rapidly oscillating kernel
and compare this method with some other numerical approximations available in previous literature.
The present paper is devoted to the construction of two new algorithms for evaluating the finite-part hypersingular integral
taking into account the results in [1].
The mathematical modeling of wave processes, electromagnetic scattering, and fracture mechanics in many areas of physics and technology makes the evaluation of singular and hypersingular integrals with rapidly oscillating kernels important (cf. [2]–[5]), and in recent years many papers have been devoted to numerical methods for approximating the integrals in (2) (see, for example, [6]–[8] and the references therein). Here, we follow the idea first presented in [1] and in [9], where quadrature formulas based on interpolation processes are considered that are convergent, stable, and can be implemented with little effort.
Defining the integral (1) as a Cauchy principal value
the integral (2) can be written as the derivative of (1)
see [10]. Moreover, from Lemma 6.1, Ch. II in [11], we can write
Recalling that , finally we obtain
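Since the displayed formulas were not reproduced above, it is worth recalling the form these integrals typically take in this literature (cf. [1], [10]); the symbols $I$, $H$, $f$, $\omega$, $t$ below are our own notation and may differ from the original displays:

```latex
% Cauchy principal value integral with oscillating kernel, as in (1):
I(f,\omega;t) \;=\; \mathrm{p.v.}\!\int_{-1}^{1} \frac{f(x)\,e^{i\omega x}}{x-t}\,dx,
\qquad -1 < t < 1,
% finite-part (hypersingular) integral, as in (2), obtained as its derivative:
H(f,\omega;t) \;=\; \mathrm{f.p.}\!\int_{-1}^{1} \frac{f(x)\,e^{i\omega x}}{(x-t)^{2}}\,dx
\;=\; \frac{d}{dt}\, I(f,\omega;t).
```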
First, we recall that if the functions and are Hölder continuous, then the integrals (1)–(3) exist (see [10]).
Very recently, in [9] the authors presented a method for evaluating the integral in (3) that makes use of the values of f and f' at the Chebyshev zeros of the first kind, proving bounds on the error and on the amplification factor. Although the resulting formula has the drawback of requiring twice as many function evaluations, it has the advantage of relying only on the weights of the quadrature sum proposed in [1] to approximate (1).
In the present paper we give two new algorithms to compute (3) which, although they require new weights, have the advantage of using only the values of the function at the same points, and which converge under a weaker smoothness condition on the density function . These latter formulas are therefore useful, for instance, in quadrature methods for solving hypersingular integral equations with rapidly oscillating kernels.
The paper is organized as follows: in Section 2, in addition to presenting the algorithm, we provide bounds on the error and on the amplification factor; in Section 3 we present an alternative approximation method that shares the computational advantages of the one in Section 2 and has better convergence behavior. Finally, in Section 4 we present numerical examples showing the agreement between the theoretical and the numerical results.
An Algorithm to Evaluate Integral (3)
From (4) we observe that the numerical approximation of (3) is strictly related to the quadrature of the following integrals:
and that can be approximated by using the methods in [1]. In [9] the authors have suggested the use of
to approximate .
Here, in order to establish a formula that does not depend on samples of the derivative function , we approximate in (5) by the Lagrange interpolating polynomial of the function f instead of the Lagrange interpolating polynomial of the function . In what follows we consider quadrature formulas for , since the integral can be treated similarly. In particular, we consider the following approximation
where the Lagrange polynomial interpolates a given function at the points , zeros of the th Jacobi polynomial with respect to the exponent .
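As a concrete illustration of the interpolatory ingredient only (not of the paper's quadrature rule itself), the sketch below builds the Lagrange interpolant of a smooth function at the Chebyshev zeros of the first kind, the special Jacobi case used later in the paper; it uses the barycentric formula, and the function names are our own:

```python
import math

def chebyshev_zeros(m):
    """Zeros of the m-th Chebyshev polynomial of the first kind on (-1, 1)."""
    return [math.cos((2 * k - 1) * math.pi / (2 * m)) for k in range(1, m + 1)]

def lagrange_at_chebyshev(f, m):
    """Return the Lagrange interpolant of f at the m Chebyshev zeros,
    evaluated via the barycentric formula (stable, O(m) per evaluation)."""
    nodes = chebyshev_zeros(m)
    vals = [f(x) for x in nodes]
    # Barycentric weights for first-kind Chebyshev points:
    # w_k proportional to (-1)^k * sin((2k-1)*pi/(2m)); any common factor cancels.
    w = [(-1) ** k * math.sin((2 * k - 1) * math.pi / (2 * m))
         for k in range(1, m + 1)]

    def p(x):
        num = den = 0.0
        for xk, wk, fk in zip(nodes, w, vals):
            if x == xk:          # evaluation point coincides with a node
                return fk
            c = wk / (x - xk)
            num += c * fk
            den += c
        return num / den

    return p
```

For an analytic function such as exp, the interpolant at 16 Chebyshev zeros already reproduces the function to near machine precision on [−1, 1].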
First, let us introduce some notations.
Let us denote by the modulus of smoothness of a given function , defined as
where and , (cf. [12]). Further, we denote by the usual uniform norm and by the th Lebesgue constant corresponding to the weight function .
We shall study the convergence of the sequence in (6) with to . For this purpose, we state a theorem showing the behavior of the function .
Theorem 1 Let and . Then for ,
where denotes a positive constant independent of and .
Proof. See Theorem 3.1 in [1]. □
We recall the next result on the simultaneous polynomial approximation of a given function (see [13], Theorem on p. 113).
Lemma 2 For every function , there exists an algebraic polynomial of degree such that
where and is a positive constant independent of and .
In the previous lemma and in what follows denotes the ordinary modulus of continuity of a given function .
Then, for the quadrature rule (6) with , the next result holds true.
Theorem 3 For every function , and , we have
and
where denotes a positive constant independent of and .
Proof. In view of Theorem 1 we can write
by using the Bernstein inequality. Hence (7) follows from when .
In order to prove (8) we remark that in view of Theorem 1
Now let be the polynomial of Lemma 2. Thus
Further
having used again the Bernstein inequality, Lemma 2, and for . Combining (10) and (11) with (9) we deduce (8). □
We want to emphasize that from (7) we can deduce the following bound
that provides the behavior of the weighted amplification factor.
We now show how to compute in (6) with . We denote by
the m-th Chebyshev orthonormal polynomial of the first kind and let be the zeros of the orthogonal polynomial . Since
where are the fundamental Lagrange polynomials with respect to the points , we have
with
and
Thus,
and
where
Denoting by the sequence of the Chebyshev orthogonal polynomials of the second kind, we have
and
Therefore
where
and
The accurate evaluation of in (13) allows us to compute for together with
and
where
are the sine and cosine integrals, respectively, and is the Euler constant. The starting values of (13) require the evaluation of the sine and cosine integrals, which can be computed with mathematical software such as Mathematica [14]. Finally, we remark that (12) can be rewritten
with respect to the coefficients
which do not depend on the value t or on the oscillatory factor ω. Thus the evaluation of in (14) can be done by a Clenshaw-type algorithm of the following kind:
We point out that, even though the quadrature rule is to be preferred to formula (6) in [9] because it does not use the values , from the point of view of convergence it performs worse (cf. the previous theorem and Theorem 1 in [9]). This is due to the fact that it uses the operator , which performs worse than the operator used in [9]. Indeed, the Lagrange operator is not well suited for simultaneous approximation, even starting from an optimal choice of interpolation points as in the case .
Another Algorithm to Evaluate Integral (3)
In the sequel we propose a new formula to compute that shares the computational advantages of and has the same convergence behavior as formula (2.6) in [1].
We introduce the polynomial interpolating a given function at the points , zeros of , and at .
Then we consider the new quadrature rule
to approximate . For this rule we can prove the following result.
Theorem 4 For every function and , we have
and
where denotes a positive constant independent of and .
Proof. To prove (15) and (16) we can follow the same steps used to prove (7) and (8), respectively. Thus, the proof follows recalling that in view of Corollary 3.2 in [15]. □
We remark that the assumption of a Hölder continuous derivative , besides ensuring the existence of , provides the convergence of (cf. Theorem 4). The same smoothness hypothesis on does not, instead, ensure the convergence of (cf. Theorem 3).
Finally, by standard computations we obtain
where and are the same defined before and where
Recalling that the polynomials , satisfy
we obtain
The evaluation of the integral in (17) allows us to compute for , together with
and .
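The starting values of the recursions above require the sine and cosine integrals Si and Ci. Outside of a computer-algebra system such as Mathematica [14], they can be approximated for moderate positive arguments by elementary quadrature; the sketch below uses a composite Simpson rule and is a rough illustration, not an accuracy-controlled routine:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def _simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3.0

def sine_integral(x):
    """Si(x) = integral_0^x sin(t)/t dt; the integrand tends to 1 as t -> 0."""
    return _simpson(lambda t: math.sin(t) / t if t else 1.0, 0.0, x)

def cosine_integral(x):
    """Ci(x) = gamma + ln(x) + integral_0^x (cos(t)-1)/t dt, for x > 0;
    the regularized integrand tends to 0 as t -> 0."""
    g = lambda t: (math.cos(t) - 1.0) / t if t else 0.0
    return EULER_GAMMA + math.log(x) + _simpson(g, 0.0, x)
```

For large arguments or high accuracy one would instead use asymptotic expansions or a library routine (e.g., the sine/cosine-integral functions provided by scientific computing packages).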
Numerical Examples
In this Section we consider two numerical examples with the aim of showing the correspondence between the numerical results and the theoretical ones by applying the algorithms presented in the previous Sections. All the computations have been performed in double precision arithmetic.
In the first test we choose the function , so that . In this case the exact solution is known, and we compare it with the numerical solutions obtained using the two proposed methods. We denote by
and
the errors obtained with the two methods. In Tables I–III we show these errors, computed against the exact value of the integral evaluated with the Mathematica package, for three different values of and for increasing values of .
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | 0.1D-01 | 0.4D-03 | 0.1D-01 | 0.3D-03 | 0.9D-01 | 0.1D-02 |
| 8 | 0.2D-05 | 0.2D-07 | 0.5D-06 | 0.1D-07 | 0.6D-05 | 0.5D-07 |
| 16 | 0.1D-14 | 0.5D-15 | 0.2D-14 | 0.3D-15 | 0.1D-10 | 0.9D-14 |
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | 0.8D-02 | 0.2D-03 | 0.8D-01 | 0.2D-02 | 0.8D-01 | 0.8D-03 |
| 8 | 0.1D-05 | 0.1D-07 | 0.5D-05 | 0.5D-07 | 0.6D-05 | 0.9D-09 |
| 16 | 0.1D-14 | 0.3D-15 | 0.2D-14 | 0.2D-15 | 0.3D-14 | 0.8D-15 |
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | 0.2D-01 | 0.7D-03 | 0.8D-01 | 0.2D-02 | 0.9D-01 | 0.6D-03 |
| 8 | 0.3D-05 | 0.3D-07 | 0.5D-05 | 0.5D-07 | 0.1D-05 | 0.2D-08 |
| 16 | 0.9D-15 | 0.5D-15 | 0.8D-14 | 0.4D-15 | 0.1D-13 | 0.4D-15 |
In the second example we choose the function , so that . In this case the exact solution is not known, so in Tables IV–VI we report only the correct digits obtained with the two proposed methods, again for three different values of and for increasing values of . Given the regularity of the functions considered in the examples, Theorems 3 and 4 give the same order of convergence, as we can see from the results in Tables I–III. Better results are obtained with the second method when the function is less regular.
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | 1.7 | 1.7 | 0.8 | 0.8 | −1. | −1.0 |
| 8 | 1.7 | 1.7 | 0.8 | 0.8 | −1.07 | −1.0 |
| 16 | 1.763 | 1.763 | 0.86 | 0.86 | −1.07 | −1.07 |
|  | 1.7634 | 1.7634 | 0.8641 | 0.8641 | −1.073 | −1.073 |
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | 0.8 | 0.8 | 2.6 | 2.6 | 0.7 | 0.6 |
| 8 | 0.8 | 0.883 | 2.69 | 2.6 | 0.7 | 0.7 |
| 16 | 0.8832 | 0.8832 | 2.692 | 2.692 | 0.7 | 0.70 |
|  | 0.88327 | 0.88327 | 2.6924 | 2.6924 | 0.7097 | 0.7097 |
| m |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 4 | −2.6 | −2.6 | 2. | 2.6 | −0.7 | −0.57 |
| 8 | −2.62 | −2.62 | 2.624 | 2.62 | −0.61 | −0.6 |
| 16 | −2.623 | −2.623 | 2.624 | 2.624 | −0.61 | −0.61 |
|  | −2.6234 | −2.6234 | 2.6246 | 2.6246 | −0.615 | −0.615 |
References
1. Capobianco MR, Criscuolo G. On quadrature for Cauchy principal value integrals of oscillatory functions. J Comput Appl Math. 2003;156:471–86.
2. Chien CC, Rajiyah H, Atluri SN. On the evaluation of hypersingular integrals arising in the boundary element method for linear elasticity. Comput Mech. 1991;8:57–70.
3. Colton D, Kress R. Inverse Acoustic and Electromagnetic Scattering Theory. New York: Springer; 1992.
4. Gray L, Martha LF, Ingraffea AR. Hypersingular integrals in boundary element fracture analysis. Int J Numer Methods Eng. 1990;29:1135–58.
5. Korsunsky AM. On the use of interpolative quadratures for hypersingular integrals in fracture mechanics. Proc R Soc Lond A. 2002;458:2721–33.
6. Boykov I, Roudnev V, Boykovova A. Approximate methods for calculating singular and hypersingular integrals with rapidly oscillating kernels. Axioms. 2022;11:150.
7. Kiang S, Fang C, Xu Z. On uniform approximations to hypersingular finite-part integrals. J Math Anal Appl. 2016;435:1210–28.
8. Liu G, Xiang S. Clenshaw-Curtis type quadrature rule for hypersingular integrals with highly oscillatory kernels. Appl Math Comput. 2019;340:251–67.
9. Capobianco MR, Criscuolo G. Approximate method to compute hypersingular finite-part integrals with rapidly oscillating kernels. EJ-Math. 2023;4(5):1–4.
10. Criscuolo G. A new algorithm for Cauchy principal value and Hadamard finite-part integrals. J Comput Appl Math. 1997;78:255–75.
11. Mikhlin SG, Prössdorf S. Singular Integral Operators. Berlin: Akademie Verlag; 1986.
12. Ditzian Z, Totik V. Moduli of Smoothness. New York: Springer-Verlag; 1987.
13. Gopengauz IE. On a theorem of A. F. Timan on the approximation of functions by polynomials on a finite segment. Mat Zametki. 1967;1:163–72 (in Russian); English transl.: Math Notes. 1967;1:110–6.
14. Wolfram S. Mathematica: A System for Doing Mathematics by Computer. Redwood City: Addison-Wesley; 1988.
15. Mastroianni G. Uniform convergence of derivatives of Lagrange interpolation. J Comput Appl Math. 1992;43:37–51.