
Derivatives and Integrals of Basic Trigonometric Functions, Study notes of Mathematics

A comprehensive study of the derivatives and integrals of basic trigonometric functions, including linear properties, fractions, exponents, radicals, logarithms, complex numbers, and calculus. It covers limits, derivatives, integration by substitution, integration by parts, partial fraction decomposition, and useful variations. The document also includes tangent and cotangent identities, reciprocal identities, Pythagorean identities, and sum and difference identities.


First year math

Formula Sheet


1 Algebra

Note to Future Self: If you find yourself needing to refer to this section, consider taking a brief break to recharge and clear your mind. Sometimes, a short pause can provide a fresh perspective and make problem-solving more effective.

1.1 Linear Properties

$a + b = b + a \qquad a(bc) = b(ac) = c(ab) \qquad a(b + c) = ab + ac$

1.2 Fractions

$a \cdot \frac{b}{c} = \frac{ab}{c}$

$\frac{a/b}{c} = \frac{a}{bc} \qquad \frac{a}{b/c} = \frac{ac}{b}$

$\frac{a}{b} \pm \frac{c}{d} = \frac{ad \pm bc}{bd}$

$\frac{a \pm b}{c \pm d} = \frac{b \pm a}{d \pm c}$

$\frac{a \pm b}{c} = \frac{a}{c} \pm \frac{b}{c} \qquad \frac{ab \pm ac}{a} = b \pm c$

$\frac{a/b}{c/d} = \frac{ad}{bc}$

1.3 Exponents

$a^n \cdot a^m = a^{n+m} \qquad (a^n)^m = a^{nm} \qquad (ab)^n = a^n b^n$

$\left(\frac{a}{b}\right)^{-n} = \left(\frac{b}{a}\right)^n \qquad \left(\frac{a}{b}\right)^n = \frac{a^n}{b^n}$

$a^{-n} = \frac{1}{a^n} \qquad a^{m/n} = (a^m)^{1/n}$

1.4 Radicals

$\sqrt[n]{x} = x^{1/n} \qquad \sqrt[m]{\sqrt[n]{x}} = \sqrt[mn]{x}$

$\sqrt[n]{ab} = \sqrt[n]{a}\,\sqrt[n]{b} \qquad \sqrt[n]{\frac{a}{b}} = \frac{\sqrt[n]{a}}{\sqrt[n]{b}}$

$\sqrt[n]{a^n} = |a|$ if $n$ is even $\qquad \sqrt[n]{a^n} = a$ if $n$ is odd

1.5 Logarithms

$\log_e x = \ln x \qquad$ if $y = \log_b x$ then $b^y = x \qquad \log_x x = 1$

$\log_b b^x = x \qquad \log_x 1 = 0 \qquad b^{\log_b x} = x$

$\log_b x = \frac{\log_a x}{\log_a b} \qquad \log_b(xy) = \log_b x + \log_b y$

$\log_b \frac{x}{y} = \log_b x - \log_b y$

1.6 Complex Numbers

$i = j = k = \sqrt{-1}$ (the imaginary unit, written $j$ or $k$ in some engineering conventions)

1.6.1 Basic Operations

$z_1 \pm z_2 = (a_1 \pm a_2) + (b_1 \pm b_2)i$

$z_1 \cdot z_2 = (a_1 a_2 - b_1 b_2) + (a_1 b_2 + a_2 b_1)i$

$\frac{z_1}{z_2} = \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 a_2 + b_1 b_2}{a_2^2 + b_2^2} + \frac{a_2 b_1 - a_1 b_2}{a_2^2 + b_2^2}\,i$

$\frac{1}{a + bi} = \frac{a - bi}{a^2 + b^2}$

1.6.2 Complex Conjugate

$\bar{z} = a - bi$

Modulus

$|z| = \sqrt{a^2 + b^2}$

1.6.3 Polar Form

$z = r(\cos\theta + i\sin\theta)$

Conversion: $a = r\cos\theta, \quad b = r\sin\theta$

Modulus: $r = |z|$
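As a quick numerical companion to these formulas (not part of the original sheet), Python's built-in complex type and the cmath module reproduce the conjugate, modulus, and polar conversion:

```python
import cmath, math

z = 3 + 4j                        # z = a + bi with a = 3, b = 4
r, theta = cmath.polar(z)         # r = |z|, theta = arg(z)
print(r, theta)                   # 5.0 0.9272952180016122

# Conversion back: a = r cos(theta), b = r sin(theta)
print(r * math.cos(theta), r * math.sin(theta))   # ~3.0 ~4.0

print(z.conjugate(), abs(z))      # (3-4j) 5.0
```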

2 Calculus

2.1 Limits

2.1.1 Limit Definition

If $\lim_{x\to a^-} f(x) = \lim_{x\to a^+} f(x)$, then $\lim_{x\to a} f(x)$ exists.

2.1.2 Basic Limits

$\lim_{x\to c} c = c \qquad \lim_{x\to c} x = c$

2.1.3 Limits of Polynomials

$\lim_{x\to c} (ax^n) = ac^n$

$\lim_{x\to\pm\infty} x^n = \begin{cases} \infty & \text{if } n \text{ is even} \\ \pm\infty & \text{if } n \text{ is odd} \end{cases}$

2.1.4 Limits of Rational Functions

$\lim_{x\to 0^{\pm}} \frac{1}{x} = \pm\infty$

$\lim_{x\to 0^{\pm}} \frac{1}{x^n} = \pm\infty$ for odd $n$

$\lim_{x\to 0} \frac{1}{x^n} = \infty$ for even $n$

2.1.5 Limits of Exponential and Logarithmic Functions

$\lim_{x\to\infty} e^x = \infty \qquad \lim_{x\to-\infty} e^x = 0$

$\lim_{x\to 0^+} \ln(x) = -\infty \qquad \lim_{x\to\infty} \ln(x) = \infty$

2.1.6 Limits Involving Trigonometric Functions

$\lim_{x\to 0} \frac{\sin(x)}{x} = 1 \qquad \lim_{x\to 0} \frac{\cos(x) - 1}{x} = 0$

$\lim_{x\to 0} \frac{\tan(x)}{x} = 1 \qquad \lim_{x\to\infty} \frac{\sin(x)}{x} = 0$

2.1.7 Limits Involving Piece-wise Functions

$\lim_{x\to c^-} f(x) = \lim_{x\to c^+} f(x) = f(c)$ for continuity at $c$

2.1.8 Limits at Infinity

$\lim_{x\to\infty} \frac{1}{x^p} = 0$ for $p > 0$

$\lim_{x\to\infty} x^p = \infty$ for $p > 0$

$\lim_{x\to\infty} \frac{\ln(x)}{x^p} = 0$ for $p > 0$

2.1.9 Limit Properties

$\lim_{x\to c} \left[a \cdot f(x)\right] = a \cdot \left[\lim_{x\to c} f(x)\right]$

$\lim_{x\to c} \left(f(x) \pm g(x)\right) = \lim_{x\to c} f(x) \pm \lim_{x\to c} g(x)$

$\lim_{x\to c} \left(f(x) \cdot g(x)\right) = \lim_{x\to c} f(x) \cdot \lim_{x\to c} g(x)$

$\lim_{x\to c} \frac{f(x)}{g(x)} = \frac{\lim_{x\to c} f(x)}{\lim_{x\to c} g(x)}$, if $\lim_{x\to c} g(x) \ne 0$

$\lim_{x\to c} \left[f(x)\right]^a = \left[\lim_{x\to c} f(x)\right]^a$

$\lim_{x\to c} \sqrt{f(x)} = \sqrt{\lim_{x\to c} f(x)}$

2.1.10 Squeeze Theorem Limits

If $g(x) \le f(x) \le h(x)$ for all $x$ in a neighborhood of $c$ (except possibly at $c$ itself), and $\lim_{x\to c} g(x) = \lim_{x\to c} h(x) = L$, then $\lim_{x\to c} f(x) = L$.
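Several of the limits above are easy to spot-check with sympy; the first line below is the classic squeeze-theorem example $-|x| \le x\sin(1/x) \le |x|$ near $0$ (a verification sketch, not part of the original sheet):

```python
import sympy as sp

x = sp.symbols('x')

# Squeeze-theorem example: -|x| <= x*sin(1/x) <= |x| near 0, so the limit is 0.
print(sp.limit(x * sp.sin(1/x), x, 0))        # 0

# A few of the basic limits listed above:
print(sp.limit(sp.sin(x)/x, x, 0))            # 1
print(sp.limit((sp.cos(x) - 1)/x, x, 0))      # 0
print(sp.limit(sp.ln(x)/x**2, x, sp.oo))      # 0  (ln x / x^p -> 0 for p > 0)
```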

2.3.3 Common Integrals

$\int k\,dx = kx + C$

$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C, \quad n \ne -1$

$\int x^{-n}\,dx = \frac{x^{-n+1}}{-n+1} + C, \quad n \ne 1$

$\int \frac{1}{x}\,dx = \ln|x| + C$

$\int \frac{1}{ax + b}\,dx = \frac{1}{a}\ln|ax + b| + C$

$\int e^x\,dx = e^x + C$

$\int \ln(x)\,dx = x\ln(x) - x + C$

$\int \sin(x)\,dx = -\cos(x) + C$

2.3.4 Integration Techniques

  1. Integration by Substitution (u-Substitution): Integration by substitution is a technique used to simplify integrals by substituting a function and its derivative into the integral. The basic idea is to choose a suitable substitution such that the resulting integral becomes easier to evaluate. Example:

$\int x\cos(x^2)\,dx$

Let $u = x^2$; then $du = 2x\,dx$, so $dx = \frac{1}{2x}\,du$, and the integral becomes:

$\int x\cos(u)\,\frac{1}{2x}\,du = \frac{1}{2}\int \cos(u)\,du = \frac{1}{2}\sin(u) + C = \frac{1}{2}\sin(x^2) + C$
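As a sanity check (not part of the original sheet), sympy reproduces the antiderivative:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x * sp.cos(x**2), x))   # sin(x**2)/2
```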

  2. Integration by Parts: Integration by parts is a technique used to integrate the product of two functions. It is based on the formula:

$\int u\,dv = uv - \int v\,du$

where $u$ and $v$ are differentiable functions of $x$. Example:

$\int x\sin(x)\,dx$

Let $u = x$ and $dv = \sin(x)\,dx$; then $du = dx$ and $v = -\cos(x)$, and the integral becomes:

$-x\cos(x) + \int \cos(x)\,dx = -x\cos(x) + \sin(x) + C$
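The by-parts result can be verified by differentiating it back (a sympy sketch, not part of the original sheet):

```python
import sympy as sp

x = sp.symbols('x')

# Differentiate the antiderivative found above; we should recover x*sin(x).
F = -x * sp.cos(x) + sp.sin(x)
print(sp.simplify(sp.diff(F, x)))   # x*sin(x)
```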

  3. Trigonometric Substitution: Trigonometric substitution is a powerful technique used to simplify integrals involving square roots of quadratic expressions. The key idea is to replace the quadratic expression with a trigonometric function, allowing us to perform the integration in a more manageable form.

$\sqrt{a^2 - x^2}$ substitution: $x = a\sin(\theta)$

$\sqrt{x^2 - a^2}$ substitution: $x = a\sec(\theta)$

$\sqrt{x^2 + a^2}$ substitution: $x = a\tan(\theta)$

Each substitution corresponds to a different form of the integrand. After substitution, trigonometric identities are used to simplify the integrand, allowing for easier integration. Example: consider the integral $\int \frac{1}{\sqrt{4 - x^2}}\,dx$. To evaluate this integral, we use the $\sqrt{a^2 - x^2}$ substitution:

$x = 2\sin(\theta), \qquad dx = 2\cos(\theta)\,d\theta$

Substituting into the integral:

$\int \frac{2\cos(\theta)}{\sqrt{4 - (2\sin(\theta))^2}}\,d\theta = \int \frac{2\cos(\theta)}{\sqrt{4 - 4\sin^2(\theta)}}\,d\theta = \int \frac{2\cos(\theta)}{\sqrt{4\cos^2(\theta)}}\,d\theta = \int \frac{2\cos(\theta)}{2\cos(\theta)}\,d\theta = \int d\theta = \theta + C = \arcsin\left(\frac{x}{2}\right) + C$
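Again as a sanity check (not part of the original sheet), sympy agrees with the hand computation:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(1 / sp.sqrt(4 - x**2), x))   # asin(x/2)
```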

  4. Partial Fraction Decomposition: Partial fraction decomposition is a method used to decompose a rational function into simpler fractions. This technique is particularly useful when integrating rational functions, as it allows us to break down complex integrals into more manageable parts. The process involves expressing a rational function as a sum of simpler fractions, typically of the form $\frac{A}{x - r}$ or $\frac{Ax + B}{(x - r)^n}$, where $A$ and $B$ are constants, $r$ is a root of the denominator polynomial, and $n$ is the multiplicity of the root. The general form of partial fraction decomposition for a rational function $\frac{P(x)}{Q(x)}$, where $P(x)$ and $Q(x)$ are polynomials and $\deg(P) < \deg(Q)$, is:

$\frac{P(x)}{Q(x)} = \frac{A_1}{x - r_1} + \frac{A_2}{x - r_2} + \cdots + \frac{A_n}{x - r_n} + \frac{B_1 x + C_1}{(x^2 + p_1 x + q_1)^{m_1}} + \cdots$

where $r_1, r_2, \ldots, r_n$ are distinct roots of $Q(x)$ and $(x^2 + p_1 x + q_1)^{m_1}$ represents a repeated quadratic factor in the denominator. Once the rational function has been decomposed into simpler fractions, each term can be integrated individually, which often yields integrals that are easier to evaluate than the original. Example: consider the integral $\int \frac{1}{x^2 - x}\,dx$. We perform partial fraction decomposition as follows:

$\frac{1}{x^2 - x} = \frac{1}{x(x - 1)}$

We want to express $\frac{1}{x(x - 1)}$ in the form $\frac{A}{x} + \frac{B}{x - 1}$. Multiplying both sides by $x(x - 1)$, we get:

$1 = A(x - 1) + Bx$

Expanding and equating coefficients, we find:

$1 = Ax - A + Bx$

Comparing coefficients, we have $A + B = 0$ (for the $x$ terms) and $-A = 1$ (for the constant term), so $A = -1$ and $B = 1$:

$\frac{1}{x^2 - x} = \frac{1}{x - 1} - \frac{1}{x} \qquad\Rightarrow\qquad \int \frac{1}{x^2 - x}\,dx = \ln|x - 1| - \ln|x| + C$
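sympy's apart performs exactly this decomposition and then integrates term by term (a verification sketch, not part of the original sheet):

```python
import sympy as sp

x = sp.symbols('x')

f = 1 / (x**2 - x)
print(sp.apart(f))           # 1/(x - 1) - 1/x
print(sp.integrate(f, x))    # log(x - 1) - log(x)
```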

3 Trigonometry

3.1 Definition of the Trig Functions

$e^{i\theta} = \cos\theta + i\sin\theta \qquad e^{\theta} = \cosh\theta + \sinh\theta$

$\sin\theta = \frac{e^{i\theta} - e^{-i\theta}}{2i} \qquad \cos\theta = \frac{e^{i\theta} + e^{-i\theta}}{2} \qquad \tan\theta = \frac{e^{i\theta} - e^{-i\theta}}{i\left(e^{i\theta} + e^{-i\theta}\right)}$

$\csc\theta = \frac{2i}{e^{i\theta} - e^{-i\theta}} \qquad \sec\theta = \frac{2}{e^{i\theta} + e^{-i\theta}} \qquad \cot\theta = \frac{i\left(e^{i\theta} + e^{-i\theta}\right)}{e^{i\theta} - e^{-i\theta}}$

$\sinh\theta = \frac{e^{\theta} - e^{-\theta}}{2} \qquad \cosh\theta = \frac{e^{\theta} + e^{-\theta}}{2} \qquad \tanh\theta = \frac{e^{\theta} - e^{-\theta}}{e^{\theta} + e^{-\theta}}$

$\operatorname{csch}\theta = \frac{2}{e^{\theta} - e^{-\theta}} \qquad \operatorname{sech}\theta = \frac{2}{e^{\theta} + e^{-\theta}} \qquad \coth\theta = \frac{e^{\theta} + e^{-\theta}}{e^{\theta} - e^{-\theta}}$

$\arcsin\theta = -i\ln\left(i\theta + \sqrt{1 - \theta^2}\right)$

$\arccos\theta = -i\ln\left(\theta + i\sqrt{1 - \theta^2}\right)$

$\arctan\theta = \frac{1}{2i}\ln\left(\frac{1 + i\theta}{1 - i\theta}\right)$

$\operatorname{arccsc}\theta = -i\ln\left(\frac{i}{\theta} + \sqrt{1 - \frac{1}{\theta^2}}\right)$

$\operatorname{arcsec}\theta = -i\ln\left(\frac{1}{\theta} + i\sqrt{1 - \frac{1}{\theta^2}}\right)$

$\operatorname{arccot}\theta = \frac{1}{2i}\ln\left(\frac{\theta + i}{\theta - i}\right)$

$\operatorname{arcsinh}\theta = \ln\left(\theta + \sqrt{\theta^2 + 1}\right)$

$\operatorname{arccosh}\theta = \ln\left(\theta + \sqrt{\theta^2 - 1}\right)$

$\operatorname{arctanh}\theta = \frac{1}{2}\ln\left(\frac{1 + \theta}{1 - \theta}\right)$

$\operatorname{arccsch}\theta = \ln\left(\frac{1}{\theta} + \sqrt{\frac{1}{\theta^2} + 1}\right)$

$\operatorname{arcsech}\theta = \ln\left(\frac{1}{\theta} + \sqrt{\frac{1}{\theta^2} - 1}\right)$

$\operatorname{arccoth}\theta = \frac{1}{2}\ln\left(\frac{\theta + 1}{\theta - 1}\right)$

3.2 Useful Variations

$\operatorname{sinc}\theta = \frac{\sin\theta}{\theta} \qquad \operatorname{cosc}\theta = \frac{\cos\theta}{\theta} \qquad \operatorname{tanc}\theta = \frac{\tan\theta}{\theta}$

$\operatorname{versin}\theta = 1 - \cos\theta$

$\operatorname{hav}\theta = \frac{1 - \cos\theta}{2} = \sin^2\left(\frac{\theta}{2}\right)$

$\operatorname{exsec}\theta = \sec\theta - 1 \qquad \operatorname{excsc}\theta = \csc\theta - 1$

$\operatorname{hav}^{-1}\theta = 2\arcsin\left(\sqrt{\theta}\right)$

3.3 Identities

3.3.1 Co-function Identities

$\sin\theta = \cos\left(\frac{\pi}{2} - \theta\right)$

$\sin\theta = -\sin(\pi + \theta)$

$\sin\theta = -\cos\left(\frac{3\pi}{2} - \theta\right)$

$\sin\theta = \sin(2\pi + \theta)$

$\tan\theta = -\cot\left(\frac{\pi}{2} + \theta\right)$

3.3.2 Tangent and Cotangent Identities

$\tan\theta = \frac{\sin\theta}{\cos\theta} \qquad \cot\theta = \frac{\cos\theta}{\sin\theta}$

3.3.3 Reciprocal Identities

$\csc\theta = \frac{1}{\sin\theta} \qquad \sec\theta = \frac{1}{\cos\theta} \qquad \cot\theta = \frac{1}{\tan\theta}$

3.3.4 Pythagorean Identities

$\sin^2\theta + \cos^2\theta = 1 \qquad \sec^2\theta = \tan^2\theta + 1 \qquad \csc^2\theta = \cot^2\theta + 1$

3.3.5 Double Angle Identities

$\sin(2\theta) = 2\sin\theta\cos\theta$

$\cos(2\theta) = \cos^2\theta - \sin^2\theta = 2\cos^2\theta - 1 = 1 - 2\sin^2\theta$

$\tan(2\theta) = \frac{2\tan\theta}{1 - \tan^2\theta}$

3.3.6 Half Angle Identities

$\sin^2\left(\frac{\theta}{2}\right) = \frac{1 - \cos\theta}{2} \qquad \cos^2\left(\frac{\theta}{2}\right) = \frac{1 + \cos\theta}{2}$

$\tan\left(\frac{\theta}{2}\right) = \frac{\sin\theta}{1 + \cos\theta} \qquad \tan^2\left(\frac{\theta}{2}\right) = \frac{1 - \cos\theta}{1 + \cos\theta}$

$\sec^2\left(\frac{\theta}{2}\right) = \frac{2}{1 + \cos\theta} \qquad \csc^2\left(\frac{\theta}{2}\right) = \frac{2}{1 - \cos\theta} \qquad \cot^2\left(\frac{\theta}{2}\right) = \frac{1 + \cos\theta}{1 - \cos\theta}$

3.3.7 Sum and Difference Identities

$\sin(\alpha \pm \beta) = \sin\alpha\cos\beta \pm \cos\alpha\sin\beta$

$\cos(\alpha \pm \beta) = \cos\alpha\cos\beta \mp \sin\alpha\sin\beta$

$\tan(\alpha \pm \beta) = \frac{\tan\alpha \pm \tan\beta}{1 \mp \tan\alpha\tan\beta}$

3.3.8 Product to Sum Identities

$\sin\alpha\sin\beta = \frac{1}{2}\left[\cos(\alpha - \beta) - \cos(\alpha + \beta)\right]$

$\cos\alpha\cos\beta = \frac{1}{2}\left[\cos(\alpha - \beta) + \cos(\alpha + \beta)\right]$

$\sin\alpha\cos\beta = \frac{1}{2}\left[\sin(\alpha + \beta) + \sin(\alpha - \beta)\right]$

$\cos\alpha\sin\beta = \frac{1}{2}\left[\sin(\alpha + \beta) - \sin(\alpha - \beta)\right]$

3.3.9 Sum to Product Identities

$\sin\alpha + \sin\beta = 2\sin\left(\frac{\alpha + \beta}{2}\right)\cos\left(\frac{\alpha - \beta}{2}\right)$

$\sin\alpha - \sin\beta = 2\cos\left(\frac{\alpha + \beta}{2}\right)\sin\left(\frac{\alpha - \beta}{2}\right)$

$\cos\alpha + \cos\beta = 2\cos\left(\frac{\alpha + \beta}{2}\right)\cos\left(\frac{\alpha - \beta}{2}\right)$

$\cos\alpha - \cos\beta = -2\sin\left(\frac{\alpha + \beta}{2}\right)\sin\left(\frac{\alpha - \beta}{2}\right)$

3.4 Derivatives of Basic Trigonometric Functions

$\frac{d}{dx}\left(\sin(x)\right) = \cos(x) \qquad \frac{d}{dx}\left(\cos(x)\right) = -\sin(x) \qquad \frac{d}{dx}\left(\tan(x)\right) = \sec^2(x)$

$\frac{d}{dx}\left(\csc(x)\right) = -\csc(x)\cot(x) \qquad \frac{d}{dx}\left(\sec(x)\right) = \sec(x)\tan(x) \qquad \frac{d}{dx}\left(\cot(x)\right) = -\csc^2(x)$

$\frac{d}{dx}\left(\arcsin(x)\right) = \frac{1}{\sqrt{1 - x^2}} \qquad \frac{d}{dx}\left(\arccos(x)\right) = -\frac{1}{\sqrt{1 - x^2}} \qquad \frac{d}{dx}\left(\arctan(x)\right) = \frac{1}{1 + x^2}$

$\frac{d}{dx}\left(\operatorname{arccsc}(x)\right) = -\frac{1}{|x|\sqrt{x^2 - 1}} \qquad \frac{d}{dx}\left(\operatorname{arcsec}(x)\right) = \frac{1}{|x|\sqrt{x^2 - 1}} \qquad \frac{d}{dx}\left(\operatorname{arccot}(x)\right) = -\frac{1}{1 + x^2}$

4 Linear Algebra

4.1 Vectors

4.1.1 Vector Notation and Operations

$\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \qquad \vec{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} \qquad \vec{v}^{\top} = \begin{bmatrix} v_1 & v_2 & \cdots & v_n \end{bmatrix}$

$\hat{v} = \frac{\vec{v}}{|\vec{v}|} \qquad |\vec{v}| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$

$\vec{v} \pm \vec{u} = \begin{bmatrix} v_1 \pm u_1 \\ v_2 \pm u_2 \\ \vdots \\ v_n \pm u_n \end{bmatrix} \qquad k\vec{v} = \begin{bmatrix} kv_1 \\ kv_2 \\ \vdots \\ kv_n \end{bmatrix}$

$\vec{v} \cdot \vec{u} = v_1 u_1 + v_2 u_2 + \cdots + v_n u_n \qquad \vec{v} \cdot \vec{u} = |\vec{v}|\,|\vec{u}|\cos(\theta)$

$\vec{v} \times \vec{u} = \vec{v}^{\times}\vec{u}$, where

$\vec{v}^{\times} = \begin{bmatrix} 0 & -v_z & v_y \\ v_z & 0 & -v_x \\ -v_y & v_x & 0 \end{bmatrix}$

4.1.2 Vector Properties

$(A^{\top})^{\top} = A \qquad (A + B)^{\top} = A^{\top} + B^{\top} \qquad (kA)^{\top} = kA^{\top}$

$(AB)^{\top} = B^{\top}A^{\top} \qquad (A^{-1})^{\top} = (A^{\top})^{-1} \qquad (\vec{v}^{\times})^{\top} = -\vec{v}^{\times}$

$\vec{v} \cdot \vec{u} = \vec{u} \cdot \vec{v} \qquad \vec{v} \cdot (\vec{u} + \vec{w}) = \vec{v} \cdot \vec{u} + \vec{v} \cdot \vec{w} \qquad (k\vec{v}) \cdot \vec{u} = k(\vec{v} \cdot \vec{u})$

$\vec{v} \perp \vec{u} \iff \vec{v} \cdot \vec{u} = 0$

$\vec{v} \times \vec{u} = -(\vec{u} \times \vec{v}) \qquad \vec{v} \times (\vec{u} + \vec{w}) = (\vec{v} \times \vec{u}) + (\vec{v} \times \vec{w})$

$(k\vec{v}) \times \vec{u} = k(\vec{v} \times \vec{u}) = \vec{v} \times (k\vec{u}) \qquad \vec{v} \cdot (\vec{v} \times \vec{u}) = \vec{u} \cdot (\vec{v} \times \vec{u}) = 0$

$i \times i = 0 \qquad i \times j = k \qquad i \times k = -j$
$j \times i = -k \qquad j \times j = 0 \qquad j \times k = i$
$k \times i = j \qquad k \times j = -i \qquad k \times k = 0$

The eigenvalues of a skew-symmetric matrix are purely imaginary or zero: if $\lambda$ is an eigenvalue of a skew-symmetric matrix, then $\lambda = 0$ or $\lambda = \pm bi$ for some real number $b$. The determinant of a matrix is the product of its eigenvalues, so if $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of a skew-symmetric matrix, then

$\det(\vec{v}^{\times}) = \lambda_1 \lambda_2 \cdots \lambda_n$

and, because $A^{\top} = -A$ forces $\det(A) = (-1)^n \det(A)$, the determinant of any skew-symmetric matrix of odd dimension is zero.
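The cross-product matrix makes these facts easy to check numerically. A minimal numpy sketch (not part of the original notes; the helper name cross_matrix is illustrative):

```python
import numpy as np

def cross_matrix(v):
    """Skew-symmetric matrix v_x such that v_x @ u == np.cross(v, u)."""
    return np.array([[0.0,   -v[2],  v[1]],
                     [v[2],   0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

v = np.array([1.0, 2.0, 3.0])
u = np.array([4.0, 5.0, 6.0])

print(np.allclose(cross_matrix(v) @ u, np.cross(v, u)))   # True

# Eigenvalues are 0 and +/- |v| i (purely imaginary); det is 0 for odd n.
print(np.linalg.eigvals(cross_matrix(v)))
print(np.linalg.det(cross_matrix(v)))                     # ~0
```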

4.2 Matrices

4.2.1 Matrix Notation

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}$

4.2.2 Special Matrices

  • Identity Matrix: $I_n$ is the $n \times n$ matrix with ones on the diagonal and zeros elsewhere.
  • Zero Matrix: $0_{m \times n}$ is the $m \times n$ matrix of all zeros.

4.2.3 Matrix Operations

$A \pm B = \begin{bmatrix} a_{11} \pm b_{11} & \cdots & a_{1n} \pm b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} \pm b_{m1} & \cdots & a_{mn} \pm b_{mn} \end{bmatrix}$

$A \pm b = \begin{bmatrix} a_{11} \pm b & \cdots & a_{1n} \pm b \\ \vdots & \ddots & \vdots \\ a_{m1} \pm b & \cdots & a_{mn} \pm b \end{bmatrix}$

$kA = \begin{bmatrix} ka_{11} & \cdots & ka_{1n} \\ \vdots & \ddots & \vdots \\ ka_{m1} & \cdots & ka_{mn} \end{bmatrix}$

$(AB)_{ij} = \sum_{k=1}^{n} a_{ik}\,b_{kj}$ (row $i$ of $A$ dotted with column $j$ of $B$)

$\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} e & f \\ g & h \end{bmatrix} = \begin{bmatrix} ae + bg & af + bh \\ ce + dg & cf + dh \end{bmatrix}$

Matrix Transpose: $A^{\top}$

$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m} \\ a_{21} & a_{22} & \cdots & a_{2m} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} \qquad A^{\top} = \begin{bmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & & \ddots & \vdots \\ a_{1m} & a_{2m} & \cdots & a_{nm} \end{bmatrix}$

Matrix Determinant: $\det(A)$ or $|A|$

For $A = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}$:

$|A| = aei + bfg + cdh - ceg - bdi - afh$

Matrix Inverse:

$A^{-1} = \frac{1}{\det(A)}\,\operatorname{adj}(A)$

Enforce symmetry:

$A \leftarrow \frac{1}{2}\left(A + A^{\top}\right)$

Cholesky Decomposition:

$A = LL^{\top}$

For the diagonal elements of $L$, compute

$L_{ii} = \sqrt{A_{ii} - \sum_{k=1}^{i-1} L_{ik}^2}$

For the off-diagonal elements of $L$, compute, for $i > j$:

$L_{ij} = \frac{1}{L_{jj}}\left(A_{ij} - \sum_{k=1}^{j-1} L_{ik} L_{jk}\right)$
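These two formulas translate almost line-for-line into code. A minimal Python sketch for symmetric positive definite input (not from the original notes; numpy.linalg.cholesky is the production route):

```python
import math

def cholesky(A):
    """Lower-triangular L with A = L L^T, for symmetric positive definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)      # diagonal formula
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]     # off-diagonal formula
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
print(cholesky(A))   # [[2.0, 0.0], [1.0, 1.4142...]]
```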

4.3 Matrix Properties

Matrix Rank: $\operatorname{rank}(A)$

For an $m \times n$ matrix $A$, the rank is the maximum number of linearly independent rows, which always equals the maximum number of linearly independent columns; hence $\operatorname{rank}(A) \le \min(m, n)$.

Matrix Norm (Frobenius norm):

$\|A\| = \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} |a_{ij}|^2}$

Matrix Trace:

$\operatorname{tr}(A) = \sum_{i=1}^{n} a_{ii}$

Adjugate:

$C_{ij} = (-1)^{i+j} \det(M_{ij})$

$\operatorname{adj}(A) = [C_{ij}]^{\top}$

where $M_{ij}$ is the matrix $A$ with row $i$ and column $j$ removed.

Orthogonality:

$A^{\top}A = I$ or $AA^{\top} = I$

4.4 Eigenvalues and Eigenvectors

4.4.1 Definition

For a square matrix $A$, an eigenvalue $\lambda$ and its corresponding eigenvector $v$ satisfy:

$Av = \lambda v$
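A quick numerical illustration of the definition (not part of the original notes), using numpy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                              # [3. 1.]

# Check A v = lambda v for the first eigenpair:
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))   # True
```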

  1. Convergence and Optimality:
    • The Projection method guarantees convergence to the nearest symmetric positive semidefinite matrix.
    • Depending on the chosen norm and the specific problem requirements, the resulting matrix may not be the unique closest solution but is often satisfactory for practical purposes.

4.6.2 Regularization Techniques

  1. Regularization Methods for Positive Semidefinite Matrices:
    • Regularization techniques are commonly employed to find the nearest symmetric positive semidefinite (SPSD) matrix to a given matrix.
    • These methods involve adding a regularization term to the original matrix to ensure positive semidefiniteness.
  2. Algorithmic Steps:

(a) Regularization: Add a small positive constant multiple of the identity matrix to the given matrix $A$: $A \leftarrow A + \epsilon I$, where $\epsilon > 0$ is a small regularization parameter.

(b) Ensure Symmetry: Replace the matrix with its symmetric part: $A \leftarrow \frac{1}{2}\left(A + A^{\top}\right)$.

(c) Eigenvalue Decomposition: Perform the eigenvalue decomposition of the regularized matrix: $A = U\Sigma U^{\top}$, where $U$ is a unitary matrix and $\Sigma$ is a diagonal matrix of eigenvalues.

(d) Clip Negative Eigenvalues: Replace any negative eigenvalues in $\Sigma$ with zero to ensure positive semidefiniteness: $\Sigma_{ii} \leftarrow \max(\Sigma_{ii}, 0)$ for $i = 1, 2, \ldots, n$, where $n$ is the size of $\Sigma$.

(e) Reconstruct the Matrix: Reconstruct the nearest symmetric positive semidefinite matrix $B = U\Sigma U^{\top}$ (see the numpy sketch at the end of this subsection).

  3. Convergence and Optimality:
    • Regularization techniques also ensure convergence to the nearest symmetric positive semidefinite matrix.
    • The specific choice of regularization parameter $\epsilon$ affects the optimality and behavior of the resulting matrix.
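A compact numpy sketch of steps (a) through (e) above (the function name nearest_spsd and the test matrix are illustrative, not from the original notes):

```python
import numpy as np

def nearest_spsd(A, eps=1e-8):
    """Approximate nearest symmetric positive semidefinite matrix:
    regularize, symmetrize, decompose, clip negative eigenvalues, reconstruct."""
    A = A + eps * np.eye(A.shape[0])     # (a) regularization
    A = 0.5 * (A + A.T)                  # (b) enforce symmetry
    eigvals, U = np.linalg.eigh(A)       # (c) eigendecomposition (symmetric case)
    eigvals = np.maximum(eigvals, 0.0)   # (d) clip negative eigenvalues
    return U @ np.diag(eigvals) @ U.T    # (e) reconstruct B = U Sigma U^T

A = np.array([[ 2.0, -3.0],
              [-3.0,  2.0]])             # indefinite: eigenvalues 5 and -1
B = nearest_spsd(A)
print(np.linalg.eigvals(B))              # all >= 0
```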