CNR Institute for Applied Mathematics “Mauro Picone” (IAC), Italy
* Corresponding author
University of Naples Federico II, Italy


This paper studies two new algorithms for approximating hypersingular integrals with rapidly oscillating kernels. The methods use an interpolatory procedure at the zeros of orthogonal polynomials. Estimates of the error and of the amplification factors are given. Some numerical examples show the agreement between the theoretical and the numerical results.

Introduction

In [1] the authors propose an approximate method for evaluating the Cauchy singular integral with rapidly oscillating kernel

$$I_\omega(f;t) := \int_{-1}^{1}\frac{f(x)}{x-t}\,e^{i\omega x}\,dx, \qquad \omega>0,\ -1<t<1, \tag{1}$$

and compare this method with some other numerical approximations available in previous literature.

The present paper is devoted to the construction of two new algorithms for evaluating the finite-part hypersingular integral

$$J_\omega(f;t) := \int_{-1}^{1}\frac{f(x)}{(x-t)^2}\,e^{i\omega x}\,dx, \qquad \omega>0,\ -1<t<1, \tag{2}$$

understood in the Hadamard finite-part sense,

taking into account the results in [1].

The mathematical modeling of wave processes, electromagnetic scattering and fracture mechanics in many areas of physics and technology makes the evaluation of singular and hypersingular integrals with rapidly oscillating kernels important (cf. [2]–[5]), and in recent years many papers have been devoted to numerical methods for approximating the integrals in (2) (see for example [6]–[8] and the references given therein). Here, we follow the idea first presented in [1] and in [9], where quadrature formulas are considered that are based on interpolation processes, are convergent and stable, and can be implemented with little effort.

Defining the integral (1) as a Cauchy principal value,

$$I_\omega(f;t) = \lim_{\varepsilon\to0^+}\left(\int_{-1}^{t-\varepsilon}+\int_{t+\varepsilon}^{1}\right)\frac{f(x)}{x-t}\,e^{i\omega x}\,dx, \qquad \omega>0,\ -1<t<1,$$

the integral (2) can be written as the derivative of (1),

$$J_\omega(f;t) = \frac{d}{dt}\,I_\omega(f;t), \qquad \omega>0,\ -1<t<1,$$

see [10]. Moreover, from Lemma 6.1, Ch. II in [11], we can write

$$J_\omega(f;t) = \int_{-1}^{1}\frac{\frac{d}{dx}\big(f(x)\,e^{i\omega x}\big)}{x-t}\,dx - \frac{e^{i\omega}f(1)}{1-t} - \frac{e^{-i\omega}f(-1)}{1+t}$$

$$= \int_{-1}^{1}\frac{f'(x)\,e^{i\omega x}}{x-t}\,dx + i\omega\int_{-1}^{1}\frac{f(x)\,e^{i\omega x}}{x-t}\,dx - \frac{e^{i\omega}f(1)}{1-t} - \frac{e^{-i\omega}f(-1)}{1+t}, \qquad \omega>0,\ -1<t<1. \tag{3}$$

Recalling that $e^{i\omega x}=\cos\omega x + i\sin\omega x$, we finally obtain

$$J_\omega(f;t) = \int_{-1}^{1}\frac{\cos\omega x\,\big(f'(x)+i\omega f(x)\big)}{x-t}\,dx \;+\; i\int_{-1}^{1}\frac{\sin\omega x\,\big(f'(x)+i\omega f(x)\big)}{x-t}\,dx - \frac{e^{i\omega}f(1)}{1-t} - \frac{e^{-i\omega}f(-1)}{1+t}, \qquad \omega>0,\ -1<t<1. \tag{4}$$
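Identity (3) can be checked numerically by evaluating both sides with standard regularizations: the Hadamard finite part via a two-term Taylor subtraction, and the Cauchy principal value via a one-term subtraction. The following sketch (not one of the algorithms of this paper; the choices of $f$, $\omega$ and $t$ are arbitrary) uses scipy for the quadratures.

```python
import numpy as np
from scipy.integrate import quad

def cquad(func, t):
    """Integrate a complex-valued function over [-1, 1], flagging the point x = t."""
    re = quad(lambda x: func(x).real, -1, 1, points=[t], limit=200)[0]
    im = quad(lambda x: func(x).imag, -1, 1, points=[t], limit=200)[0]
    return re + 1j * im

w, t = 10.0, 0.3
f, fp = np.exp, np.exp                                        # f(x) = e^x, f'(x) = e^x

g  = lambda x: f(x) * np.exp(1j * w * x)                      # g = f e^{i w x}
gp = lambda x: (fp(x) + 1j * w * f(x)) * np.exp(1j * w * x)   # g'

L = np.log((1 - t) / (1 + t))                                 # PV integral of 1/(x-t)

# finite part of g/(x-t)^2: subtract the two-term Taylor expansion of g at t
reg2 = lambda x: (g(x) - g(t) - gp(t) * (x - t)) / (x - t) ** 2
J = cquad(reg2, t) + gp(t) * L - g(t) * (1 / (1 - t) + 1 / (1 + t))

# right-hand side of (3): PV integral of g'/(x-t) minus the boundary terms
reg1 = lambda x: (gp(x) - gp(t)) / (x - t)
rhs = cquad(reg1, t) + gp(t) * L \
      - np.exp(1j * w) * f(1) / (1 - t) - np.exp(-1j * w) * f(-1) / (1 + t)

assert abs(J - rhs) < 1e-6
```

Both sides agree to the accuracy of the underlying quadratures, which supports the integration-by-parts identity.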

First, we recall that if the functions $f$ and $f'$ are Hölder continuous, then the integrals (1)–(3) exist (see [10]).

Very recently, in [9] the authors presented a method for evaluating the integral in (3) that makes use of the values of $f$ and $f'$ at the Chebyshev zeros of the first kind, proving bounds for the error and for the amplification factor. Although the resulting formula has the drawback of using twice as many function evaluations, it has the advantage of requiring only the weights of the quadrature sum proposed in [1] to approximate (1).

In the present paper we give two new algorithms to compute (3) which, although they use new weights, have the advantage of employing only the values of the function $f$ at the same points, and which converge under a weaker smoothness condition on the density function $f$. These formulas are therefore useful, for instance, in quadrature methods for solving hypersingular integral equations with rapidly oscillating kernels.

The paper is organized as follows. In Section 2, besides presenting the first algorithm, we provide bounds for the error and for the amplification factor; in Section 3 we present an alternative approximation method that shares the computational advantages of the one in Section 2 and has better convergence behavior. Finally, in Section 4 we present some numerical examples showing the agreement between the theoretical and the numerical results.

An Algorithm to Evaluate Integral (3)

From (4) we observe that the numerical approximation of (3) is strictly related to the quadrature of the following integrals:

$$I_S^\omega(f';t) := \int_{-1}^{1}\frac{f'(x)}{x-t}\,\sin\omega x\,dx, \qquad I_C^\omega(f';t) := \int_{-1}^{1}\frac{f'(x)}{x-t}\,\cos\omega x\,dx, \tag{5}$$

and of $I_S^\omega(f;t)$, $I_C^\omega(f;t)$, which can be approximated by using the methods in [1]. In [9] the authors suggested the use of

$$\hat I_{S,m}^{\,\omega}(v^{\alpha,\beta};f';t) := \int_{-1}^{1}\frac{\ell_m(v^{\alpha,\beta};f';x)}{x-t}\,\sin\omega x\,dx$$

to approximate $I_S^\omega(f';t)$.

Here, in order to establish a formula that does not depend on samples of the derivative $f'$, we approximate $f'$ in (5) with the derivative $\ell_m'(v^{\alpha,\beta};f)$ of the Lagrange interpolant of $f$, instead of the Lagrange interpolant $\ell_m(v^{\alpha,\beta};f')$ of $f'$. In what follows we consider quadrature formulas for $I_S^\omega(f';t)$, since the integral $I_C^\omega(f';t)$ can be treated similarly. In particular, we consider the following approximation

$$I_{S,m}^{\,\omega}(v^{\alpha,\beta};f;t) := \int_{-1}^{1}\frac{\ell_m'(v^{\alpha,\beta};f;x)}{x-t}\,\sin\omega x\,dx, \tag{6}$$

where the Lagrange polynomial $\ell_m(v^{\alpha,\beta};g)$ interpolates a given function $g$ at the points $x_{m,k}^{\alpha,\beta}$, $k=1,\dots,m$, the zeros of the $m$th Jacobi polynomial $p_m^{\alpha,\beta}$, $m\in\mathbb N$, with respect to the exponents $\alpha,\beta>-1$.

First, let us introduce some notations.

Let us denote by $\omega_\varphi(g;\delta)$ the modulus of smoothness of a given function $g$, defined as

$$\omega_\varphi(g;\delta) := \sup_{0<h\le\delta}\ \max_{|x|\le1}\big|\Delta_{h\varphi}\,g(x)\big|,$$

where $\varphi(x)=\sqrt{1-x^2}$ and $\Delta_{h\varphi}\,g(x)=g\!\left(x+\tfrac h2\varphi(x)\right)-g\!\left(x-\tfrac h2\varphi(x)\right)$ (cf. [12]). Further, we denote by $\|g\|=\max_{|x|\le1}|g(x)|$ the usual uniform norm and by $\Lambda_m^{\alpha,\beta}$, $m\in\mathbb N$, the $m$th Lebesgue constant corresponding to the weight function $v^{\alpha,\beta}(x)=(1-x)^\alpha(1+x)^\beta$, $\alpha,\beta>-1$, $|x|\le1$.
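To make the definition concrete, $\omega_\varphi(g;\delta)$ can be estimated by brute force, sampling $h$ and $x$ on grids (a rough sketch; the grid sizes below are arbitrary choices):

```python
import numpy as np

def dt_modulus(g, delta, nx=2001, nh=100):
    """Brute-force estimate of the Ditzian-Totik modulus omega_phi(g; delta)."""
    x = np.linspace(-1.0, 1.0, nx)
    phi = np.sqrt(1.0 - x ** 2)
    best = 0.0
    for h in np.linspace(delta / nh, delta, nh):
        lo, hi = x - 0.5 * h * phi, x + 0.5 * h * phi
        ok = (lo >= -1.0) & (hi <= 1.0)       # keep both evaluation points in [-1, 1]
        best = max(best, np.max(np.abs(g(hi[ok]) - g(lo[ok]))))
    return best

# sin is 1-Lipschitz, so omega_phi(sin; delta) <= delta * max(phi) = delta
w = dt_modulus(np.sin, 0.1)
assert 0.05 < w <= 0.1
```

The step $x \pm \frac h2\varphi(x)$ shrinks near the endpoints, which is exactly what distinguishes this modulus from the ordinary one.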

We shall study the convergence of the sequence $I_{S,m}^{\,\omega}(v^{-1/2,-1/2};f;t)=I_{S,m}^{\,\omega}(f;t)$ in (6) with $\alpha=\beta=-1/2$ to $I_S^\omega(f';t)$. For this purpose, we state a theorem describing the behavior of $I_S^\omega(f';t)$.

Theorem 1. Let $f\in C^1$ and $\omega\ge0$. Then, for $|t|<1$,

$$\big|I_S^\omega(f';t)\big| \le c\,\log\frac{e}{1-t^2}\left\{(1+\omega)\,\|f'\| + \int_0^1\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta\right\},$$

where $c$ denotes a positive constant independent of $t$, $f$ and $\omega$. □

Proof. See Theorem 3.1 in [1].

We recall the next result on the simultaneous polynomial approximation of a given function g (see [13], Theorem p. 113).

Lemma 2. For every function $g\in C^k$ there exists an algebraic polynomial $q_m$ of degree $m\ge 4k+5$ such that

$$\big|g^{(i)}(x)-q_m^{(i)}(x)\big| \le c\left(\frac{\sqrt{1-x^2}}{m}\right)^{k-i}\omega\!\left(g^{(k)};\frac{\sqrt{1-x^2}}{m}\right), \qquad i=0,1,\dots,k,$$

where $|x|\le1$ and $c$ is a positive constant independent of $m$, $g$ and $x$.

In the previous lemma and in what follows ω(g;.) denotes the ordinary modulus of continuity of a given function g.
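Lemma 2 is a simultaneous-approximation statement: a single polynomial is close to $g$ and to its derivatives at once. For a smooth function this is easy to observe with a Chebyshev fit (the degree, grid and test function below are arbitrary choices; the fitted polynomial merely plays the role of $q_m$):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1, 1, 400)
g, gp = np.sin, np.cos                      # a smooth g with known derivative
q = C.Chebyshev.fit(x, g(x), 20)            # stand-in for the polynomial q_m

assert np.max(np.abs(g(x) - q(x))) < 1e-12           # close to g ...
assert np.max(np.abs(gp(x) - q.deriv()(x))) < 1e-10  # ... and to g' simultaneously
```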

Then, for the quadrature rule (6) with α=β=1/2, the next result holds true.

Theorem 3. For every function $f\in C^k$, $k\ge0$, and $\omega\ge0$, we have

$$\big|I_{S,m}^{\,\omega}(f;t)\big| \le c\,\log\frac{e}{1-t^2}\;m^2\log m\,(2+\omega+\log m)\,\|f\|, \tag{7}$$

and

$$\big|I_S^\omega(f';t)-I_{S,m}^{\,\omega}(f;t)\big| \le c\,\log\frac{e}{1-t^2}\left\{\frac{\omega\!\left(f^{(k)};\frac1m\right)}{m^{k-2}}\,\log m\,(2+\omega+\log m) + \int_0^{\frac1m}\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta\right\}, \tag{8}$$

where $c$ denotes a positive constant independent of $m$, $f$, $\omega$ and $t\in(-1,1)$.

Proof. In view of Theorem 1 we can write

$$\big|I_{S,m}^{\,\omega}(f;t)\big| \le c\,\log\frac{e}{1-t^2}\left\{(1+\omega)\,\|\ell_m'(f)\| + \int_0^{1/m}\frac{\omega_\varphi(\ell_m'(f);\delta)}{\delta}\,d\delta + \int_{1/m}^{1}\frac{\omega_\varphi(\ell_m'(f);\delta)}{\delta}\,d\delta\right\}$$

$$\le c\,\log\frac{e}{1-t^2}\left\{(1+\omega+\log m)\,\|\ell_m'(f)\| + \frac1m\,\|\ell_m''(f)\,\varphi\|\right\} \le c\,\log\frac{e}{1-t^2}\,(2+\omega+\log m)\,\|\ell_m'(f)\| \le c\,\log\frac{e}{1-t^2}\,m^2\,(2+\omega+\log m)\,\|\ell_m(f)\|,$$

by using the Bernstein inequality. Hence (7) follows from $\|\ell_m\|=\mathcal O(\log m)$ when $\alpha=\beta=-1/2$.

In order to prove (8), we remark that, in view of Theorem 1,

$$\big|I_S^\omega(f';t)-I_{S,m}^{\,\omega}(f;t)\big| = \big|I_S^\omega\big(f'-\ell_m'(f);t\big)\big|$$

$$\le c\,\log\frac{e}{1-t^2}\left\{(1+\omega)\,\|f'-\ell_m'(f)\| + \int_0^1\frac{\omega_\varphi(f'-\ell_m'(f);\delta)}{\delta}\,d\delta\right\}. \tag{9}$$

Now let $q_{m-1}$ be the polynomial of Lemma 2. Thus

$$\|f'-\ell_m'(f)\| \le \|f'-q_{m-1}'\| + \|\ell_m'(q_{m-1}-f)\| \le c\left\{\frac{1}{m^{k-1}}\,\omega\!\left(f^{(k)};\frac1m\right) + \frac{m^2\log m}{m^{k}}\,\omega\!\left(f^{(k)};\frac1m\right)\right\} \le \frac{c}{m^{k-2}}\,\omega\!\left(f^{(k)};\frac1m\right)\log m. \tag{10}$$

Further,

$$\int_0^1\frac{\omega_\varphi(f'-\ell_m'(f);\delta)}{\delta}\,d\delta \le \int_0^{1/m}\frac{\omega_\varphi(f'-q_{m-1}';\delta)}{\delta}\,d\delta + \int_{1/m}^{1}\frac{\omega_\varphi(f'-q_{m-1}';\delta)}{\delta}\,d\delta + \int_0^{1/m}\frac{\omega_\varphi(\ell_m'(q_{m-1}-f);\delta)}{\delta}\,d\delta + \int_{1/m}^{1}\frac{\omega_\varphi(\ell_m'(q_{m-1}-f);\delta)}{\delta}\,d\delta$$

$$\le c\left\{\int_0^{1/m}\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta + \log m\,\|f'-q_{m-1}'\| + \frac1m\,\|\ell_m''(q_{m-1}-f)\,\varphi\| + \log m\,\|\ell_m'(q_{m-1}-f)\|\right\}$$

$$\le c\left\{\int_0^{1/m}\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta + \frac{\log m}{m^{k-1}}\,\omega\!\left(f^{(k)};\frac1m\right) + m^2(1+\log m)\,\|\ell_m(q_{m-1}-f)\|\right\}$$

$$\le c\left\{\int_0^{1/m}\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta + \frac{\log m}{m^{k-2}}\,(1+\log m)\,\omega\!\left(f^{(k)};\frac1m\right)\right\}, \tag{11}$$

having again used the Bernstein inequality, Lemma 2 and $\|\ell_m\|=\mathcal O(\log m)$ for $\alpha=\beta=-1/2$. Combining (10) and (11) with (9), we deduce (8). □

We emphasize that from (7) we can deduce the bound

$$\left\|\,I_{S,m}^{\,\omega}(f;\cdot)\,\log^{-1}\frac{e}{1-(\cdot)^2}\,\right\| \le c\,m^2\log m\,(2+\omega+\log m)\,\|f\|,$$

which describes the behavior of the weighted amplification factor.
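The estimate $\|\ell_m\|=\mathcal O(\log m)$ for $\alpha=\beta=-1/2$, used repeatedly above, is the logarithmic growth of the Lebesgue constant at the first-kind Chebyshev zeros. This is easy to observe numerically; the constant $1.1$ below is an empirical margin over the known asymptotics $\frac2\pi\log m + 0.9625\ldots$ and is not taken from the text:

```python
import numpy as np

def lebesgue_const(m, nx=4000):
    """Max over a grid of sum_k |l_{m,k}(x)|, for first-kind Chebyshev nodes."""
    nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))
    x = np.linspace(-1, 1, nx)
    L = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        other = np.delete(nodes, k)
        L += np.abs(np.prod((x[:, None] - other) / (xk - other), axis=1))
    return L.max()

for m in (5, 10, 20, 40):
    assert lebesgue_const(m) <= 2 / np.pi * np.log(m) + 1.1
```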

We now show how to compute $I_{S,m}^{\,\omega}(f;t)$ in (6) with $\alpha=\beta=-1/2$. Denote by

$$p_0^{-\frac12,-\frac12}(x)=p_0(x)=\frac{1}{\sqrt\pi}, \qquad p_m^{-\frac12,-\frac12}(x)=p_m(x)=\sqrt{\frac{2}{\pi}}\,T_m(x), \quad m\ge1,$$

the $m$th orthonormal Chebyshev polynomial of the first kind, and let $x_{m,k}^{-1/2,-1/2}=x_{m,k}=\cos\frac{(2k-1)\pi}{2m}$ be the zeros of the orthogonal polynomial $T_m$. Since

$$\ell_m\big(v^{-\frac12,-\frac12};f;x\big)=\ell_m(f;x)=\sum_{k=1}^{m}l_{m,k}(x)\,f(x_{m,k}),$$

where $l_{m,k}$, $k=1,\dots,m$, are the fundamental Lagrange polynomials with respect to the points $x_{m,k}$, $k=1,\dots,m$, we have

$$I_{S,m}^{\,\omega}(f;t)=\sum_{k=1}^{m}\left[\int_{-1}^{1}\frac{l_{m,k}'(x)}{x-t}\,\sin\omega x\,dx\right]f(x_{m,k}),$$

with

$$l_{m,k}'(x)=\sum_{i=1}^{m-1}a_i\,\frac{d}{dx}\,p_i(x), \qquad k=1,\dots,m,$$

and

$$a_i=\int_{-1}^{1}\frac{l_{m,k}(x)\,p_i(x)}{\sqrt{1-x^2}}\,dx=\frac{\pi}{m}\sum_{j=1}^{m}l_{m,k}(x_{m,j})\,p_i(x_{m,j})=\frac{\pi}{m}\,p_i(x_{m,k}), \qquad i=1,\dots,m-1.$$

Thus,

$$l_{m,k}'(x)=\frac{2}{m}\sum_{i=1}^{m-1}T_i(x_{m,k})\,T_i'(x), \qquad k=1,\dots,m,$$

and

$$I_{S,m}^{\,\omega}(f;t)=\frac{2}{m}\sum_{k=1}^{m}\left[\sum_{i=1}^{m-1}T_i(x_{m,k})\,q_i^\omega(t)\right]f(x_{m,k}), \tag{12}$$

where

$$q_i^\omega(t)=\int_{-1}^{1}\frac{T_i'(x)}{x-t}\,\sin\omega x\,dx, \qquad i=1,2,\dots.$$
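The expansion of $l_{m,k}'$ in terms of $T_i$ can be sanity-checked against the derivative of the Lagrange basis polynomial built directly from its roots ($m$, $k$ and the evaluation point below are arbitrary choices):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

m, k = 8, 3                                      # degree and node index (1-based)
nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))  # zeros of T_m
xk = nodes[k - 1]

# Lagrange basis polynomial l_{m,k} from its roots, then differentiated
other = np.delete(nodes, k - 1)
lk = np.poly(other) / np.prod(xk - other)        # monomial coefficients, high -> low
direct = np.polyval(np.polyder(lk), 0.37)

# formula: l'_{m,k}(x) = (2/m) sum_{i=1}^{m-1} T_i(x_{m,k}) T_i'(x)
acc = 0.0
for i in range(1, m):
    ci = np.zeros(i + 1)
    ci[i] = 1.0                                  # Chebyshev coefficients of T_i
    acc += C.chebval(xk, ci) * C.chebval(0.37, C.chebder(ci))
formula = 2.0 / m * acc

assert abs(direct - formula) < 1e-8
```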

Denoting by $\{U_n\}_{n\in\mathbb N}$ the sequence of Chebyshev orthogonal polynomials of the second kind, we have

$$T_n'(x)=n\,U_{n-1}(x), \qquad n=1,2,\dots,$$

and

$$U_0(x)\equiv1, \qquad U_1(x)=2x, \qquad U_{n+1}(x)=2x\,U_n(x)-U_{n-1}(x), \quad n=1,2,\dots.$$

Therefore

$$q_i^\omega(t)=i\,\bar q_{i-1}^{\,\omega}(t), \qquad i=1,2,\dots,$$

where

$$\bar q_i^{\,\omega}(t)=\int_{-1}^{1}\frac{U_i(x)}{x-t}\,\sin\omega x\,dx, \qquad i=0,1,\dots,$$

$$\bar q_{n+1}^{\,\omega}(t)=2t\,\bar q_n^{\,\omega}(t)-\bar q_{n-1}^{\,\omega}(t)+2\bar M_n^{\,\omega}, \qquad n=1,2,\dots, \tag{13}$$

and

$$\bar M_n^{\,\omega}=\int_{-1}^{1}U_n(x)\,\sin\omega x\,dx, \qquad n=0,1,\dots.$$
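Recurrence (13) can be verified against direct principal-value quadrature ($\omega$ and $t$ below are arbitrary choices; the $U_n$ are built in monomial form by their three-term recurrence):

```python
import numpy as np
from scipy.integrate import quad

w, t = 5.0, 0.2

# U_n in monomial coefficients (low -> high) via U_{n+1} = 2x U_n - U_{n-1}
U = [np.array([1.0]), np.array([0.0, 2.0])]
for n in range(1, 6):
    nxt = np.zeros(n + 2)
    nxt[1:] = 2 * U[n]
    nxt[:n] -= U[n - 1]
    U.append(nxt)

def pv_q(n):
    """PV integral of U_n(x) sin(wx)/(x-t) by singularity subtraction."""
    g = lambda x: np.polynomial.polynomial.polyval(x, U[n]) * np.sin(w * x)
    smooth = quad(lambda x: (g(x) - g(t)) / (x - t), -1, 1, points=[t], limit=200)[0]
    return smooth + g(t) * np.log((1 - t) / (1 + t))

def Mbar(n):
    return quad(lambda x: np.polynomial.polynomial.polyval(x, U[n]) * np.sin(w * x),
                -1, 1, limit=200)[0]

for n in (1, 2, 3):
    assert abs(pv_q(n + 1) - (2 * t * pv_q(n) - pv_q(n - 1) + 2 * Mbar(n))) < 1e-6
```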

The accurate evaluation of $\bar M_n^{\,\omega}$ in (13) allows us to compute $\bar q_n^{\,\omega}(t)$ for $n=0,1,\dots$, together with

$$\bar q_0^{\,\omega}(t)=\int_{-1}^{1}\frac{\sin\omega x}{x-t}\,dx=\sin\omega t\,\big[\mathrm{Ci}(\tau_1)-\mathrm{Ci}(|\tau_2|)\big]+\cos\omega t\,\big[\mathrm{Si}(\tau_1)+\mathrm{Si}(|\tau_2|)\big],$$

and

$$\bar q_1^{\,\omega}(t)=2\int_{-1}^{1}\frac{x\,\sin\omega x}{x-t}\,dx=2t\,\bar q_0^{\,\omega}(t),$$

where

$$\mathrm{Si}(\tau)=\int_0^\tau\frac{\sin x}{x}\,dx, \qquad \mathrm{Ci}(\tau)=\int_0^\tau\frac{\cos x-1}{x}\,dx+\log\tau+C, \qquad \tau>0,$$

are the sine and cosine integrals, respectively; $\tau_1=\omega(1-t)$, $\tau_2=\omega(1+t)$, and $C$ is the Euler constant. The starting values of (13) require the evaluation of the sine and cosine integrals, which can be computed by mathematical software such as Mathematica [14]. Finally, we remark that (12) can be rewritten as

$$I_{S,m}^{\,\omega}(f;t)=\frac{2}{m}\sum_{i=0}^{m-2}c_{m,i}(f)\,\bar q_i^{\,\omega}(t), \tag{14}$$

with the coefficients

$$c_{m,i}(f)=\sum_{k=1}^{m}(i+1)\,T_{i+1}(x_{m,k})\,f(x_{m,k}), \qquad i=0,1,\dots,m-2,$$

which depend neither on the value $t$ nor on the oscillatory factor $\omega$. Thus the evaluation of $I_{S,m}^{\,\omega}(f;t)$ in (14) can be carried out by a Clenshaw-type algorithm of the following kind:

$$z_m=z_{m-1}=0, \qquad w_{m-1}=0,$$

$$z_k=2t\,z_{k+1}-z_{k+2}+c_{m,k}(f), \qquad k=m-2,m-3,\dots,0,$$

$$w_k=2\,z_{k+1}\,\bar M_k^{\,\omega}+w_{k+1}, \qquad k=m-2,m-3,\dots,0,$$

$$I_{S,m}^{\,\omega}(f;t)=\frac{2}{m}\big(\bar q_0^{\,\omega}(t)\,z_0+w_0\big).$$
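Since the scheme above is pure algebra on the recurrence (13), it can be tested with synthetic data: random stand-ins replace the coefficients $c_{m,i}(f)$ and moments $\bar M_n^{\,\omega}$, keeping only the two facts the derivation uses, namely $\bar M_0^{\,\omega}=0$ and $\bar q_1^{\,\omega}=2t\,\bar q_0^{\,\omega}$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, t = 10, 0.4
c = rng.standard_normal(m - 1)       # stand-ins for c_{m,0}, ..., c_{m,m-2}
Mbar = rng.standard_normal(m - 1)    # stand-ins for Mbar_0, ..., Mbar_{m-2}
Mbar[0] = 0.0                        # Mbar_0 = integral of sin(wx) over [-1,1] = 0

# reference: generate qbar_0, ..., qbar_{m-2} by recurrence (13), sum as in (14)
q = [1.2345, 2 * t * 1.2345]         # stand-in qbar_0, and qbar_1 = 2 t qbar_0
for n in range(1, m - 2):
    q.append(2 * t * q[n] - q[n - 1] + 2 * Mbar[n])
direct = 2.0 / m * sum(ci * qi for ci, qi in zip(c, q))

# Clenshaw-type algorithm from the text
z = np.zeros(m + 1)                  # z_m = z_{m-1} = 0
w = np.zeros(m)                      # w_{m-1} = 0
for k in range(m - 2, -1, -1):
    z[k] = 2 * t * z[k + 1] - z[k + 2] + c[k]
    w[k] = 2 * z[k + 1] * Mbar[k] + w[k + 1]
clenshaw = 2.0 / m * (q[0] * z[0] + w[0])

assert abs(direct - clenshaw) < 1e-10
```

The two evaluations agree to rounding, which confirms the inhomogeneous Clenshaw recursion handles the $2\bar M_n^{\,\omega}$ terms correctly.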

We point out that even if the quadrature $I_{S,m}^{\,\omega}(f;t)$ is to be preferred to formula (6) in [9], because it does not use the values $f'(x_{m,k})$, $k=1,\dots,m$, from the convergence point of view it performs worse (cf. the previous theorem and Theorem 1 in [9]). This is due to the fact that $I_{S,m}^{\,\omega}(f;t)$ uses the operator $\ell_m'$, which performs worse than the operator $\ell_m$ used in [9]. Indeed, the Lagrange operator is not well suited to simultaneous approximation, even if we start from an optimal choice of interpolation points as in the case $\alpha=\beta=-1/2$.
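Besides Mathematica, the sine and cosine integrals are available in Python as scipy.special.sici, so the starting value $\bar q_0^{\,\omega}(t)$ can be cross-checked against an independent principal-value computation by singularity subtraction ($\omega$ and $t$ are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

w, t = 10.0, 0.3
tau1, tau2 = w * (1 - t), w * (1 + t)

# closed form for qbar_0 from the text
(si1, ci1), (si2, ci2) = sici(tau1), sici(tau2)
closed = np.sin(w * t) * (ci1 - ci2) + np.cos(w * t) * (si1 + si2)

# PV int sin(wx)/(x-t) dx = int (sin wx - sin wt)/(x-t) dx + sin(wt) log((1-t)/(1+t))
smooth = lambda x: (np.sin(w * x) - np.sin(w * t)) / (x - t)
pv = quad(smooth, -1, 1, points=[t], limit=200)[0] \
     + np.sin(w * t) * np.log((1 - t) / (1 + t))

assert abs(closed - pv) < 1e-7
```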

Another Algorithm to Evaluate Integral (3)

In the sequel we propose a new formula to compute $I_S^\omega(f';t)$ that shares the computational advantages of $I_{S,m}^{\,\omega}(f;t)$ and has the same convergence behavior as formula (2.6) in [1].

We introduce the polynomial $\ell_{m,1,1}(g)$ interpolating a given function $g$ at the points $x_{m,k}$, $k=1,\dots,m$, zeros of $T_m$, and at $x_{m,0}=-1$, $x_{m,m+1}=1$:

$$\ell_{m,1,1}(g;x)=(1-x^2)\,\ell_m\!\left(\frac{g}{1-(\cdot)^2};x\right)+\big[(-1)^m(1-x)\,g(-1)+(1+x)\,g(1)\big]\,\frac{T_m(x)}{2}\,.$$
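By construction, $\ell_{m,1,1}(g)$ reproduces $g$ at the $m$ Chebyshev zeros (where $T_m$ vanishes) and at $\pm1$ (where the factor $1-x^2$ vanishes). A quick check, with an arbitrary smooth test function:

```python
import numpy as np

m = 7
nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))   # zeros of T_m
Tm = np.polynomial.chebyshev.Chebyshev.basis(m)

def lagrange(vals, x):
    """Plain Lagrange interpolant at the Chebyshev nodes."""
    out = 0.0
    for k, xk in enumerate(nodes):
        other = np.delete(nodes, k)
        out += vals[k] * np.prod(x - other) / np.prod(xk - other)
    return out

def ell_m11(g, x):
    vals = np.array([g(xk) for xk in nodes]) / (1 - nodes ** 2)   # g / (1 - (.)^2)
    return (1 - x ** 2) * lagrange(vals, x) + \
           ((-1) ** m * (1 - x) * g(-1.0) + (1 + x) * g(1.0)) * Tm(x) / 2

for x in list(nodes) + [-1.0, 1.0]:
    assert abs(ell_m11(np.cos, x) - np.cos(x)) < 1e-9
```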

Then we consider the new quadrature rule

$$I_{S,m,1,1}^{\,\omega}(f;t)=I_S^\omega\big(\ell_{m,1,1}'(f);t\big)$$

to approximate $I_S^\omega(f';t)$. For it we can prove the following result.

Theorem 4. For every function $f\in C^k$, $k\ge0$, and $\omega\ge0$, we have

$$\big|I_{S,m,1,1}^{\,\omega}(f;t)\big| \le c\,\log\frac{e}{1-t^2}\,\log m\,(2+\omega+\log m)\,\|f'\|, \tag{15}$$

and

$$\big|I_S^\omega(f';t)-I_{S,m,1,1}^{\,\omega}(f;t)\big| \le c\,\log\frac{e}{1-t^2}\left\{\frac{\omega\!\left(f^{(k)};\frac1m\right)}{m^{k-1}}\,\log m\,(2+\omega+\log m)+\int_0^{\frac1m}\frac{\omega_\varphi(f';\delta)}{\delta}\,d\delta\right\}, \tag{16}$$

where $c$ denotes a positive constant independent of $m$, $f$, $\omega$ and $t\in(-1,1)$.

Proof. To prove (15) and (16) we can follow the same steps used to prove (7) and (8), respectively. The proof then follows by recalling that $\|\ell_{m,1,1}'\|=\mathcal O(\log m)$, in view of Corollary 3.2 in [15]. □

We remark that the assumption of the existence of a Hölder continuous derivative $f'$, besides ensuring the existence of $I_S^\omega(f';t)$, also guarantees the convergence of $I_{S,m,1,1}^{\,\omega}(f;t)$ (cf. Theorem 4). Instead, the same smoothness hypothesis on $f$ does not ensure the convergence of $I_{S,m}^{\,\omega}(f;t)$ (cf. Theorem 3).

Finally, by standard computations we obtain

$$I_{S,m,1,1}^{\,\omega}(f;t)=\frac{2}{m}\sum_{k=1}^{m}\Bigg\{\sum_{j=2}^{m-1}T_j(x_{m,k})\Big[-2\big(t\,q_j^\omega(t)+M_j^\omega\big)+(1-t^2)\,j\,\bar q_{j-1}^{\,\omega}(t)-j\,t\,\bar M_{j-1}^{\,\omega}-j\big(\bar M_j^{\,\omega}-M_j^\omega\big)\Big]-t\,q_0^\omega(t)+x_{m,k}\Big[\frac{6}{\omega^2}\big(\omega\cos\omega-\sin\omega\big)-2t\,q_1^\omega(t)+(1-t^2)\,\bar q_0^{\,\omega}(t)\Big]\Bigg\}\frac{f(x_{m,k})}{1-x_{m,k}^2}$$

$$+\frac{(-1)^{m+1}}{2}\Big\{q_m^\omega(t)+m\big[\bar M_{m-1}^{\,\omega}-(1-t)\,\bar q_{m-1}^{\,\omega}(t)\big]\Big\}f(-1)+\frac12\Big\{q_m^\omega(t)+m\big[\bar M_{m-1}^{\,\omega}+(1+t)\,\bar q_{m-1}^{\,\omega}(t)\big]\Big\}f(1),$$

where $\bar q_n^{\,\omega}(t)$ and $\bar M_n^{\,\omega}$, $n=0,1,2,\dots$, are the quantities defined before, and where

$$q_n^\omega(t)=\int_{-1}^{1}\frac{T_n(x)}{x-t}\,\sin\omega x\,dx, \qquad n=0,1,\dots,$$

$$M_n^\omega=\int_{-1}^{1}T_n(x)\,\sin\omega x\,dx, \qquad n=0,1,\dots. \tag{17}$$

Recalling that the polynomials $T_n$, $n\in\mathbb N$, satisfy

$$T_0(x)\equiv1, \qquad T_1(x)=x, \qquad T_{n+1}(x)=2x\,T_n(x)-T_{n-1}(x), \quad n=1,2,\dots,$$

we obtain

$$q_{n+1}^\omega(t)=2t\,q_n^\omega(t)-q_{n-1}^\omega(t)+2M_n^\omega, \qquad n=1,2,\dots.$$

The evaluation of the integrals $M_n^\omega$ in (17) allows us to compute $q_n^\omega(t)$ for $n=1,2,\dots$, together with

$$q_0^\omega(t)=\bar q_0^{\,\omega}(t) \qquad \text{and} \qquad q_1^\omega(t)=\tfrac12\,\bar q_1^{\,\omega}(t).$$

Numerical Examples

In this Section we consider two numerical examples with the aim of showing the correspondence between the numerical results and the theoretical ones by applying the algorithms presented in the previous Sections. All the computations have been performed in double precision arithmetic.

In the first test we choose the function $f(x)=\exp(x)$, so that $f'(x)=\exp(x)$. In this case we know the exact solution, and therefore we compare it with the numerical solutions obtained using the two proposed methods. We denote by

$$E_{S,m}^{\,\omega}(f;t)=\big|I_S^\omega(f';t)-I_{S,m}^{\,\omega}(f;t)\big| \qquad \text{and} \qquad E_{S,m,1,1}^{\,\omega}(f;t)=\big|I_S^\omega(f';t)-I_{S,m,1,1}^{\,\omega}(f;t)\big|$$

the errors obtained with the two different methods. In Tables I–III we show the errors of the two methods, computed with respect to the exact value of the integral evaluated with the Mathematica package, for three different values of $t\in(-1,1)$ and $\omega$, and for increasing values of $m\in\mathbb N$.

m  | E_{S,m}^ω(f;0.1) | E_{S,m,1,1}^ω(f;0.1) | E_{S,m}^ω(f;0.5) | E_{S,m,1,1}^ω(f;0.5) | E_{S,m}^ω(f;0.9) | E_{S,m,1,1}^ω(f;0.9)
4  | 0.1D-01 | 0.4D-03 | 0.1D-01 | 0.3D-03 | 0.9D-01 | 0.1D-02
8  | 0.2D-05 | 0.2D-07 | 0.5D-06 | 0.1D-07 | 0.6D-05 | 0.5D-07
16 | 0.1D-14 | 0.5D-15 | 0.2D-14 | 0.3D-15 | 0.1D-10 | 0.9D-14
Table I. f(x) = exp(x), ω = 10.
m  | E_{S,m}^ω(f;0.1) | E_{S,m,1,1}^ω(f;0.1) | E_{S,m}^ω(f;0.5) | E_{S,m,1,1}^ω(f;0.5) | E_{S,m}^ω(f;0.9) | E_{S,m,1,1}^ω(f;0.9)
4  | 0.8D-02 | 0.2D-03 | 0.8D-01 | 0.2D-02 | 0.8D-01 | 0.8D-03
8  | 0.1D-05 | 0.1D-07 | 0.5D-05 | 0.5D-07 | 0.6D-05 | 0.9D-09
16 | 0.1D-14 | 0.3D-15 | 0.2D-14 | 0.2D-15 | 0.3D-14 | 0.8D-15
Table II. f(x) = exp(x), ω = 50.
m  | E_{S,m}^ω(f;0.1) | E_{S,m,1,1}^ω(f;0.1) | E_{S,m}^ω(f;0.5) | E_{S,m,1,1}^ω(f;0.5) | E_{S,m}^ω(f;0.9) | E_{S,m,1,1}^ω(f;0.9)
4  | 0.2D-01 | 0.7D-03 | 0.8D-01 | 0.2D-02 | 0.9D-01 | 0.6D-03
8  | 0.3D-05 | 0.3D-07 | 0.5D-05 | 0.5D-07 | 0.1D-05 | 0.2D-08
16 | 0.9D-15 | 0.5D-15 | 0.8D-14 | 0.4D-15 | 0.1D-13 | 0.4D-15
Table III. f(x) = exp(x), ω = 100.

In the second example we choose the function $f(x)=\frac12\big(x\sqrt{1-x^2}+\arcsin x\big)$, so that $f'(x)=\sqrt{1-x^2}$. In this case we do not know the exact solution, and in Tables IV–VI we report only the correct digits obtained using the two proposed methods, as in the first example for three different values of $t\in(-1,1)$ and $\omega$, and for increasing values of $m\in\mathbb N$. Taking into account the regularity of the functions considered in the examples, Theorems 3 and 4 give the same order of convergence, as can be seen from the results in Tables I–III. Better results are obtained with the second method when the function is less regular.

m  | I_{S,m}^ω(f;0.1) | I_{S,m,1,1}^ω(f;0.1) | I_{S,m}^ω(f;0.5) | I_{S,m,1,1}^ω(f;0.5) | I_{S,m}^ω(f;0.9) | I_{S,m,1,1}^ω(f;0.9)
4  | 1.7 | 1.7 | 0.8 | 0.8 | −1. | −1.0
8  | 1.7 | 1.7 | 0.8 | 0.8 | −1.07 | −1.0
16 | 1.763 | 1.763 | 0.86 | 0.86 | −1.07 | −1.07
32 | 1.7634 | 1.7634 | 0.8641 | 0.8641 | −1.073 | −1.073
Table IV. f(x) = 1/2 (x√(1−x²) + arcsin x), ω = 10.
m  | I_{S,m}^ω(f;0.1) | I_{S,m,1,1}^ω(f;0.1) | I_{S,m}^ω(f;0.5) | I_{S,m,1,1}^ω(f;0.5) | I_{S,m}^ω(f;0.9) | I_{S,m,1,1}^ω(f;0.9)
4  | 0.8 | 0.8 | 2.6 | 2.6 | 0.7 | 0.6
8  | 0.8 | 0.883 | 2.69 | 2.6 | 0.7 | 0.7
16 | 0.8832 | 0.8832 | 2.692 | 2.692 | 0.7 | 0.70
32 | 0.88327 | 0.88327 | 2.6924 | 2.6924 | 0.7097 | 0.7097
Table V. f(x) = 1/2 (x√(1−x²) + arcsin x), ω = 50.
m  | I_{S,m}^ω(f;0.1) | I_{S,m,1,1}^ω(f;0.1) | I_{S,m}^ω(f;0.5) | I_{S,m,1,1}^ω(f;0.5) | I_{S,m}^ω(f;0.9) | I_{S,m,1,1}^ω(f;0.9)
4  | −2.6 | −2.6 | 2. | 2.6 | −0.7 | −0.57
8  | −2.62 | −2.62 | 2.624 | 2.62 | −0.61 | −0.6
16 | −2.623 | −2.623 | 2.624 | 2.624 | −0.61 | −0.61
32 | −2.6234 | −2.6234 | 2.6246 | 2.6246 | −0.615 | −0.615
Table VI. f(x) = 1/2 (x√(1−x²) + arcsin x), ω = 100.
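As a cross-check of the first example, the first algorithm is short to implement end to end. In the sketch below the moments $\bar M_n^{\,\omega}$ are obtained by ordinary quadrature rather than by a dedicated routine; for $f(x)=e^x$, $\omega=10$ and $t=0.5$ the result agrees with an independent principal-value reference for $I_S^\omega(f';t)$ to the accuracy of that reference.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def method1(f, m, w, t):
    """Sketch of the first algorithm: approximate PV int f'(x) sin(wx)/(x-t) dx."""
    nodes = np.cos((2 * np.arange(1, m + 1) - 1) * np.pi / (2 * m))  # zeros of T_m
    # U_n in monomial form by the three-term recurrence
    U = [np.array([1.0]), np.array([0.0, 2.0])]
    for n in range(1, m):
        nxt = np.zeros(n + 2)
        nxt[1:] = 2 * U[n]
        nxt[:n] -= U[n - 1]
        U.append(nxt)
    # moments Mbar_n, here simply by ordinary quadrature
    Mbar = [quad(lambda x: np.polynomial.polynomial.polyval(x, U[n]) * np.sin(w * x),
                 -1, 1, limit=200)[0] for n in range(m)]
    # qbar_n by recurrence (13), started from the Si/Ci closed form
    tau1, tau2 = w * (1 - t), w * (1 + t)
    (si1, ci1), (si2, ci2) = sici(tau1), sici(tau2)
    qbar = [np.sin(w * t) * (ci1 - ci2) + np.cos(w * t) * (si1 + si2)]
    qbar.append(2 * t * qbar[0])
    for n in range(1, m - 1):
        qbar.append(2 * t * qbar[n] - qbar[n - 1] + 2 * Mbar[n])
    # quadrature sum (12), with q_i = i * qbar_{i-1}
    Tvals = np.cos(np.outer(np.arange(m), np.arccos(nodes)))          # T_i(x_{m,k})
    s = 0.0
    for k in range(m):
        s += sum(i * qbar[i - 1] * Tvals[i, k] for i in range(1, m)) * f(nodes[k])
    return 2.0 / m * s

w, t = 10.0, 0.5
approx = method1(np.exp, 16, w, t)

# reference: PV integral of f'(x) sin(wx)/(x-t) with f'(x) = e^x, by subtraction
g = lambda x: np.exp(x) * np.sin(w * x)
exact = quad(lambda x: (g(x) - g(t)) / (x - t), -1, 1, points=[t], limit=300)[0] \
        + g(t) * np.log((1 - t) / (1 + t))

assert abs(approx - exact) < 1e-6
```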

References

  1. Capobianco MR, Criscuolo G. On quadrature for Cauchy principal value integrals of oscillatory functions. J Comput Appl Math. 2003;156:471–86.
  2. Chien CC, Rajiyah H, Atluri SN. On the evaluation of hypersingular integrals arising in the boundary element method for linear elasticity. Comput Mech. 1991;8:57–70.
  3. Colton D, Kress R. Inverse Acoustic and Electromagnetic Scattering Theory. New York: Springer; 1992.
  4. Gray L, Martha LF, Ingraffea AR. Hypersingular integrals in boundary element fracture analysis. Int J Numer Methods Eng. 1990;29:1135–58.
  5. Korsunsky AM. On the use of interpolative quadratures for hypersingular integrals in fracture mechanics. Proc R Soc Lond A. 2002;458:2721–33.
  6. Boykov I, Roudnev V, Boykovova A. Approximate methods for calculating singular and hypersingular integrals with rapidly oscillating kernels. Axioms. 2022;11:150.
  7. Kiang S, Fang C, Xu Z. On uniform approximations to hypersingular finite-part integrals. J Math Anal Appl. 2016;435:1210–28.
  8. Liu G, Xiang S. Clenshaw–Curtis type quadrature rule for hypersingular integrals with highly oscillatory kernels. Appl Math Comput. 2019;340:251–67.
  9. Capobianco MR, Criscuolo G. Approximate method to compute hypersingular finite-part integrals with rapidly oscillating kernels. EJ-Math. 2023;4(5):1–4.
  10. Criscuolo G. A new algorithm for Cauchy principal value and Hadamard finite-part integrals. J Comput Appl Math. 1997;78:255–75.
  11. Mikhlin SG, Prössdorf S. Singular Integral Operators. Berlin: Akademie Verlag; 1986.
  12. Ditzian Z, Totik V. Moduli of Smoothness. New York: Springer-Verlag; 1987.
  13. Gopengauz IE. On a theorem of A.F. Timan on the approximation of functions by polynomials on a finite segment. Mat Zametki. 1967;1:163–72 (in Russian). Math Notes. 1967;1:110–6.
  14. Wolfram S. Mathematica: A System for Doing Mathematics by Computer. Redwood City: Addison-Wesley; 1988.
  15. Mastroianni G. Uniform convergence of derivatives of Lagrange interpolation. J Comput Appl Math. 1992;43:37–51.