
Revista Colombiana de Matemáticas

versión impresa ISSN 0034-7426

Rev.colomb.mat. vol.51 no.1 Bogotá ene./jun. 2017

https://doi.org/10.15446/recolma.v51n1.66831 

Original article

Ball convergence theorem for a Steffensen-type third-order method

Teorema de convergencia en bola para un método de tercer orden de tipo Steffensen

Ioannis K. Argyros1 

Santhosh George2 

1 Department of Mathematical Sciences, Cameron University, Lawton, Oklahoma 73505, USA, e-mail: iargyros@cameron.edu

2 Department of Mathematical and Computational Sciences, NIT Karnataka, Karnataka, India-575 025, e-mail: sgeorge@nitk.ac.in


ABSTRACT:

We present a local convergence analysis of a family of Steffensen-type third-order methods for approximating a solution of a nonlinear equation. We use hypotheses only up to the first derivative, in contrast to earlier studies such as [2,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28], which use hypotheses up to the fourth derivative. In this way the applicability of these methods is extended under weaker hypotheses. Moreover, the radius of convergence and computable error bounds on the distances involved are given. Numerical examples are also presented.

Key words and phrases: Steffensen's method; Newton's method; order of convergence; local convergence


1. Introduction

In this study we are concerned with the problem of approximating a locally unique solution x* of the equation

F(x) = 0,     (1)

where F : D ⊆ S → S is a nonlinear function, D is a convex subset of S, and S is ℝ or ℂ. Newton-like methods are widely used for solving (1). These methods are usually studied via two types of convergence analysis: semi-local and local. The semi-local analysis gives, based on information around an initial point, conditions ensuring the convergence of the iterative procedure, while the local analysis, based on information around a solution, provides estimates of the radii of the convergence balls [3,5,20,21,22,24,26].

Third-order methods such as Euler's, Halley's, super-Halley's and Chebyshev's [2-28] require the evaluation of the second derivative F'' at each step, which in general is very expensive. That is why many authors have used higher-order multipoint methods [2-28]. In this paper, we study the local convergence of a third-order Steffensen-type method defined for each n = 0, 1, 2, ... by

where x0 is an initial point. Method (2) was studied in [18] under hypotheses reaching up to the fourth derivative of the function F.
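Since the displayed formulas are not reproduced here, the following sketch implements one representative derivative-free third-order scheme of Steffensen type: a Steffensen substep followed by a correction that reuses the frozen divided difference f[xn, xn + f(xn)]. This particular composition and the function name are illustrative assumptions and need not coincide exactly with the paper's family (2).

```python
def steffensen_third_order(f, x0, tol=1e-12, max_iter=50):
    """One representative Steffensen-type third-order scheme (illustrative):
    a Steffensen substep followed by a correction that reuses the same
    (frozen) first-order divided difference f[x_n, x_n + f(x_n)]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # first-order divided difference replacing f'(x_n), as in Steffensen's method
        dd = (f(x + fx) - fx) / fx
        y = x - fx / dd            # Steffensen substep (second order on its own)
        x_new = y - f(y) / dd      # correction with the frozen divided difference
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For instance, `steffensen_third_order(lambda x: x**2 - 2, 1.5)` converges to the root sqrt(2) in a handful of iterations without evaluating any derivative.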

Other single- and multi-point methods can be found in [1,3,20,25] and the references therein. The local convergence of the preceding methods has been shown under hypotheses on the fourth derivative (or even higher). These hypotheses restrict the applicability of these methods. As a motivational example, let us define a function f on a domain D by

Choose x* = 1. We have that

Then, obviously, the function f''' is unbounded on D. In the present paper we only use hypotheses on the first Fréchet derivative. This way we expand the applicability of method (2).
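A quick numerical check of the unboundedness claim. The concrete choice below — f(x) = x³ ln x² + x⁵ − x⁴ with f(0) = 0 on D = [−1/2, 3/2], whose third derivative is f'''(x) = 6 ln x² + 60x² − 24x + 22 — is an assumed reconstruction of the motivational example, not taken verbatim from the text:

```python
import math

# Assumed reconstruction of the motivational example:
# f(x) = x^3 ln(x^2) + x^5 - x^4 with f(0) = 0, whose third derivative is
# f'''(x) = 6 ln(x^2) + 60 x^2 - 24 x + 22.
def f3(x):
    return 6.0 * math.log(x * x) + 60.0 * x * x - 24.0 * x + 22.0

# |f'''(x)| grows without bound as x -> 0, so f''' is unbounded near 0.
for x in (1e-1, 1e-3, 1e-6):
    print(x, f3(x))
```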

The rest of the paper is organized as follows: Section 2 contains the local convergence analysis of method (2). Numerical examples are presented in the concluding Section 3.

2. Local convergence

We present the local convergence analysis of method (2) in this section. Let U(w, ρ) and Ū(w, ρ) stand, respectively, for the open and closed balls in S with center w Є S and radius ρ > 0.

Let L0 > 0, L > 0, M0 > 0, M > 0 and α > 0 be given parameters. It is convenient for the local convergence analysis of method (2) that follows to define some scalar functions and parameters. Define the function g on the interval

by

and parameters

Notice that if:

We have that g(rA) = 0, and

Define function g1 on the interval by

and set

We get that. It follows from the Intermediate Value Theorem that the function h1 has zeros in the interval (0, r0). Denote by r1 the smallest such zero. Moreover, define the function on the interval by

and set

Then, we have that and . Hence, function h has a smallest zero rp. Furthermore, define function on the interval by

and set

Then, we have and. Hence, the function h2 has a smallest zero denoted by r2. Set

Then, we get that for each

and

Next, using the above notation we present the local convergence analysis of method (2).

Theorem 2.1. Let F : D ⊆ S → S be a differentiable function. Suppose that there exist L0 > 0, L > 0, M0 > 0, α > 0 and M > 0 such that for each x, y Є D the following hold

and

where r is defined by (3). Then, the sequence {xn} generated by method (2) for x0 Є U(x*, r) \ {x*} is well defined, remains in U(x*, r) for each n = 0, 1, 2, … and converges to x*. Moreover, the following estimates hold for each n = 0, 1, 2, …,

and

where the "g" functions are defined above Theorem 2.1. Furthermore, if there exists R ≥ r such that L0R < 2, then the limit point x* is the only solution of the equation F(x) = 0 in Ū(x*, R) ∩ D.

Proof. We shall use induction to show estimates (13) and (14). Using the hypothesis x0 Є U(x*, r) \ {x*}, the definition of r and (8), we get that

It follows from (15) and the Banach Lemma on invertible functions [3,5,19,20,22,23] that F'(x0) is invertible and
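The Banach lemma on invertible operators invoked here and below can be stated as follows (a standard formulation, cf. [3,5,23]):

```latex
% Banach lemma: if A is a bounded linear operator on a Banach space with
% \|I - A\| < 1, then A is invertible and
\[
  \|A^{-1}\| \le \frac{1}{1 - \|I - A\|}.
\]
```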

We can write by (7) that

Then, we have by (10), (11) and (17) that

and

where we used for each. We also have by (18) and (12) that

Next we shall show that the linear operator appearing in the first substep of method (2) is invertible. Using the definition of r0, (8) and (18), we get in turn that

It follows from (20) that is invertible and

Hence, y0 is well defined by the first substep of method (2) for n = 0. Then, we can write

where The first expression on the right-hand side of (22), using (9) and (16), gives

Using (7), (9), (18) and (19) the numerator of the second expression in (22) gives

Then, it follows from (4), (16), (21), (22)-(24) that

which shows (13) for n = 0 and y0 Є U(x*, r). Next, we shall show that F(x0) - F(y0) is invertible. Using the definition of function p, x0 ≠ x*, (5), (8) and (13) (for n = 0), we get in turn that

It follows from (25) that F(x0) - F(y0) is invertible and

Hence, x1 is well defined by the second step of method (2) for n = 0. We can also write that

where

and

Then, using (6), (16), (21), (23) and (26)-(29), we get that

which shows (14) for n = 0 and x1 Є U(x*, r). By simply replacing x0, y0, x1 by xk, yk, xk+1 in the preceding estimates we arrive at estimates (13) and (14). Using the estimate ||xk+1 - x*|| ≤ c ||xk - x*|| < r with c Є [0, 1), we deduce that xk+1 Є U(x*, r) and that xk converges to x*.

To show the uniqueness part, let y* Є Ū(x*, R) ∩ D with F(y*) = 0, and define Q = ∫₀¹ F'(y* + θ(x* - y*)) dθ. Using (7) we get that

It follows from (30) and the Banach lemma on invertible functions that Q is invertible. Finally, from the identity 0 = F(x*) - F(y*) = Q(x* - y*), we conclude that x* = y*.

Remark 2.2. (1) In view of (9) and the estimate

||F'(x*)^(-1) F'(x)|| = ||F'(x*)^(-1)(F'(x) - F'(x*)) + I|| ≤ 1 + L0 ||x - x*||,

condition (11) can be dropped and M can be replaced by M(t) = 1 + L0 t.

(2) The results obtained here can be used for operators F satisfying autonomous differential equations [3] of the form

F'(x) = P(F(x)),

where P is a continuous operator. Then, since F'(x*) = P(F(x*)) = P(0), we can apply the results without actually knowing x*. For example, let F(x) = ex - 1. Then, we can choose P(x) = x + 1.
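The identity is easy to verify numerically for the example just given; a minimal check with F(x) = e^x - 1 and P(t) = t + 1:

```python
import math

def F(x):          # F(x) = e^x - 1, the example from the remark
    return math.exp(x) - 1.0

def P(t):          # the continuous operator P with F'(x) = P(F(x))
    return t + 1.0

def Fprime(x):     # F'(x) = e^x
    return math.exp(x)

# F'(x) = P(F(x)) holds for every x, so F'(x*) = P(F(x*)) = P(0) = 1
# is available without knowing the solution x* itself.
for x in (-1.0, 0.0, 0.7):
    assert abs(Fprime(x) - P(F(x))) < 1e-12
print(P(0.0))  # 1.0
```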

(3) The radius rA was shown by us to be the convergence radius of Newton's method [1-5]

x_{n+1} = x_n - F'(x_n)^(-1) F(x_n),  for each n = 0, 1, 2, ...,   (31)

under the conditions (9) and (10). It follows from the definition of r that the convergence radius r of method (2) cannot be larger than the convergence radius rA of the second-order Newton's method (31) if L0M0 ≥ L. Even in the case L0M0 < L, r may still be smaller than rA.

As already noted in [3,5], rA is at least as large as the convergence ball given by Rheinboldt [26],

rR = 2/(3L).

In particular, for L0 < L we have that

rR < rA

and

rR/rA → 1/3 as L0/L → 0.

That is, our convergence ball rA is at most three times larger than Rheinboldt's. The same value for rR was given by Traub [27].
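Assuming the usual closed forms rA = 2/(2L0 + L) and rR = 2/(3L) from the related literature (these expressions are an assumption here, since the displayed formulas are not reproduced above), the three-fold gap is easy to check numerically:

```python
def r_A(L0, L):
    # assumed Argyros-type radius for Newton's method: 2 / (2*L0 + L)
    return 2.0 / (2.0 * L0 + L)

def r_R(L):
    # assumed Rheinboldt/Traub radius: 2 / (3*L)
    return 2.0 / (3.0 * L)

L = 1.0
for L0 in (1.0, 0.5, 0.1, 0.001):
    print(L0, r_A(L0, L) / r_R(L))  # ratio increases towards 3 as L0 -> 0
```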

(4) It is worth noticing that method (2) does not change when we use the conditions of Theorem 2.1 instead of the stronger conditions used in [2,4,9-28]. Moreover, we can compute the computational order of convergence (COC) defined by

ξ = ln(||x_{n+1} - x*|| / ||x_n - x*||) / ln(||x_n - x*|| / ||x_{n-1} - x*||),

or the approximate computational order of convergence (ACOC)

ξ1 = ln(||x_{n+1} - x_n|| / ||x_n - x_{n-1}||) / ln(||x_n - x_{n-1}|| / ||x_{n-1} - x_{n-2}||).

This way we obtain in practice the order of convergence without resorting to estimates involving derivatives higher than the first Fréchet derivative of operator F.
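The ACOC is convenient in practice because it uses only successive iterates, not x*. A minimal sketch (the helper name `acoc` is illustrative):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from a list of
    iterates x_0, x_1, ...: ratios of logarithms of consecutive
    differences, so no knowledge of the exact solution x* is required."""
    orders = []
    for n in range(len(xs) - 3):
        num = math.log(abs(xs[n + 3] - xs[n + 2]) / abs(xs[n + 2] - xs[n + 1]))
        den = math.log(abs(xs[n + 2] - xs[n + 1]) / abs(xs[n + 1] - xs[n]))
        orders.append(num / den)
    return orders
```

Feeding in Newton iterates for x² - 2 = 0, for example, yields values approaching 2, the known order of Newton's method.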

3. Numerical Examples

We present numerical examples in this section.

Example 3.1. Let D = . Define the function f on D by

Then we have, for x* = 0, that L0 = L = M = M0 = 1 and α = 1. The parameters are given in Table 1 and the error estimates in Table 2.

Table 1 

Table 2 

Example 3.2. Let D = [-1, 1]. Define the function f on D by

f(x) = ex - 1.   (34)

Using (34) and x* = 0, we get that L0 = e - 1 < L = M = M0 = e and α = 1. The parameters are given in Table 3 and the error estimates in Table 4.
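Under the assumed closed form rA = 2/(2L0 + L) for the Newton radius (cf. Remark 2.2 (3); the formula itself is an assumption taken from related work), the parameters of Example 3.2 give a concrete radius:

```python
import math

L0 = math.e - 1.0      # parameters from Example 3.2
L = math.e

# assumed radius formula r_A = 2 / (2*L0 + L)
r_A = 2.0 / (2.0 * L0 + L)
print(r_A)  # ~0.3249
```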

Table 3 

Table 4 

Example 3.3. Returning to the motivational example from the introduction of this study, we have L0 = L = 96.662907, M = 2, M0 = 3M and α = 1. The parameters are given in Table 5 and error estimates are given in Table 6.

Table 5 

References

[1] S. Amat, S. Busquier, and S. Plaza, Dynamics of the King's and Jarratt iterations, Aequationes. Math. 69 (2005), 212-213. [ Links ]

[2] S. Amat, M. A. Hernandez, and N. Romero, A modified Chebyshev's iterative method with at least sixth order of convergence, Appl. Math. Comput. 206 (2008), no. 1, 164-174. [ Links ]

[3] I. K. Argyros, Convergence and Application of Newton-type Iterations, Springer, 2008. [ Links ]

[4] I. K. Argyros, D. Chen, and Q. Quian, The Jarratt method in Banach space setting, J. Comput. Appl. Math. 51 (1994), 103-106. [ Links ]

[5] I. K. Argyros and Said Hilout, Computational methods in nonlinear Analysis, World Scientific Publ. Co., 2013, New Jersey, USA. [ Links ]

[6] B. Neta, C. Chun, and M. Scott, Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations, Appl. Math. Comput. 227 (2014), 567-592. [ Links ]

[7] V. Candela and A. Marquina, Recurrence relations for rational cubic methods I: The Halley method, Computing 44 (1990), 169-184. [ Links ]

[8] J. Chen, Some new iterative methods with three-order convergence, Appl. Math. Comput. 181 (2006), 1519-1522. [ Links ]

[9] A. Cordero, J. L. Hueso, E. Martínez, and J. R. Torregrosa, Steffensen type methods for solving nonlinear equations, J. Comput. Appl. Math. 236 (2012), 3058-3064. [ Links ]

[10] A. Cordero, J. Maimo, J. Torregrosa, M. P. Vassileva, and P. Vindel, Chaos in King's iterative family, Appl. Math. Lett. 26 (2013), 842-848. [ Links ]

[11] A. Cordero, A. Magreñán, C. Quemada, and J. R. Torregrosa, Stability study of eighth-order iterative methods for solving nonlinear equations, J. Comput. Appl. Math. 291 (2016), 348-357. [ Links ]

[12] A. Cordero and J. Torregrosa, Variants of Newton's method using fifth order quadrature formulas, Appl. Math. Comput. 190 (2007), 686-698. [ Links ]

[13] J. A. Ezquerro and M. A. Hernández, A uniparametric Halley-type iteration with free second derivative, Int. J. Pure Appl. Math. 6 (2003), no. 1, 99-110. [ Links ]

[14] ______, New iterations of R-order four with reduced computational cost, BIT Numer. Math. 49 (2009), 325-342. [ Links ]

[15] M. Frontini and E. Sormani, Some variants of Newton's method with third order convergence, Appl. Math. Comput. 140 (2003), 419-426. [ Links ]

[16] M. A. Hernandez and M. A. Salanova, Sufficient conditions for semilocal convergence of a fourth order multipoint iterative method for solving equations in Banach spaces, Southwest J. Pure Appl. Math (1999), no. 1, 29-40. [ Links ]

[17] M. A. Hernández and J. M. Gutiérrez, Recurrence relations for the super-Halley method, Computers Math. Applic. 36 (1998), no. 7, 1-8. [ Links ]

[18] J. P. Jaiswal, A new third-order derivative free method for solving nonlinear equations, Universal J. Appl. Math. 1, 2 (2013), 131-135. [ Links ]

[19] R. F. King, A family of fourth-order methods for nonlinear equations, SIAM. J. Numer. Anal. 10 (1973), 876-879. [ Links ]

[20] A. K. Maheshwari, A fourth order iterative method for solving nonlinear equations, Appl. Math. Comput. 211 (2009), 283-391. [ Links ]

[21] S. K. Parhi and D. K. Gupta, Semi-local convergence of a Stirling-like method in Banach spaces, Int. J. Comput. Methods 7 (2010), no. 02, 215-228. [ Links ]

[22] M. S. Petković, B. Neta, L. Petković, and J. Džunić, Multipoint methods for solving nonlinear equations, Elsevier, 2013. [ Links ]

[23] F. A. Potra and V. Ptak, Nondiscrete induction and iterative processes, Research Notes in Mathematics, Vol. 103, Pitman Publ., Boston, MA, 1984. [ Links ]

[24] L. B. Rall, Computational solution of nonlinear operator equations, Robert E. Krieger, New York (1979). [ Links ]

[25] H. Ren, Q. Wu, and W. Bi, New variants of Jarratt method with sixth-order convergence, Numer. Algorithms 52 (2009), no. 4, 585-603. [ Links ]

[26] W. C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations, In: Mathematical models and numerical methods (A.N. Tikhonov et al. eds.) pub. 3, no. 19, 129-142, Banach Center, Warsaw Poland. [ Links ]

[27] J. F. Traub, Iterative methods for the solution of equations, Prentice Hall Englewood Cliffs, New Jersey, USA, 1964. [ Links ]

[28] S. Weerakoon and T. G. I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000), 87-93. [ Links ]

2010 Mathematics Subject Classification. 65D10, 65D99.

Received: April 2016; Accepted: September 2016

This is an open-access article distributed under the terms of the Creative Commons Attribution License.