1. Basics

1.1 Goniometric functions

For the goniometric ratios of a point $p$ on the unit circle holds: \[ \cos(\phi)=x_p~~,~~\sin(\phi)=y_p~~,~~\tan(\phi)=\frac{y_p}{x_p} \] From this follows $\sin^2(x)+\cos^2(x)=1$ and $\cos^{-2}(x)=1+\tan^2(x)$. Further:
\[ \cos(a\pm b)=\cos(a)\cos(b)\mp\sin(a)\sin(b)~~,~~ \sin(a\pm b)=\sin(a)\cos(b)\pm\cos(a)\sin(b) \]
\[ \tan(a\pm b)=\frac{\tan(a)\pm\tan(b)}{1\mp\tan(a)\tan(b)} \]
The sum formulas are:
\begin{eqnarray*} \sin(p)+\sin(q)&=&2\sin(\frac{1}{2}(p+q))\cos(\frac{1}{2}(p-q))\\ \sin(p)-\sin(q)&=&2\cos(\frac{1}{2}(p+q))\sin(\frac{1}{2}(p-q))\\ \cos(p)+\cos(q)&=&2\cos(\frac{1}{2}(p+q))\cos(\frac{1}{2}(p-q))\\ \cos(p)-\cos(q)&=&-2\sin(\frac{1}{2}(p+q))\sin(\frac{1}{2}(p-q)) \end{eqnarray*}
From these equations it can be derived that
\begin{eqnarray*} 2\cos^2(x)=1+\cos(2x) &~~,~~&2\sin^2(x)=1-\cos(2x)\\ \sin(\pi-x)=\sin(x) &~~,~~&\cos(\pi-x)=-\cos(x)\\ \sin(\frac{1}{2}\pi-x)=\cos(x)&~~,~~&\cos(\frac{1}{2}\pi-x)=\sin(x) \end{eqnarray*}
Conclusions from equalities:
\begin{eqnarray*} \underline{\sin(x)=\sin(a)}&~~\Rightarrow~~&x=a\pm2k\pi\mbox{ or }x=(\pi-a)\pm2k\pi,~~k\in I\hspace{-1mm}N \\ \underline{\cos(x)=\cos(a)}&~~\Rightarrow~~&x=a\pm2k\pi\mbox{ or }x=-a\pm2k\pi\\ \underline{\tan(x)=\tan(a)}&~~\Rightarrow~~&x=a\pm k\pi\mbox{ and }x\neq\frac{\pi}{2}\pm k\pi \end{eqnarray*}
The following relations exist between the inverse goniometric functions: \[ \arctan(x)=\arcsin\left(\frac{x}{\sqrt{x^2+1}}\right)=\arccos\left(\frac{1}{\sqrt{x^2+1}}\right)~~,~~ \sin(\arccos(x))=\sqrt{1-x^2} \] where the $\arccos$ relation holds for $x\geq0$.
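These identities lend themselves to quick numerical sanity checks; a minimal Python sketch (the sample values are arbitrary):

```python
import math

# Check the sum formula for sin and the arctan/arcsin relation numerically.
a, b = 0.7, -1.3
assert math.isclose(math.sin(a + b), math.sin(a)*math.cos(b) + math.cos(a)*math.sin(b))
assert math.isclose(math.cos(a + b), math.cos(a)*math.cos(b) - math.sin(a)*math.sin(b))

x = 2.5  # the arccos form below requires x >= 0
assert math.isclose(math.atan(x), math.asin(x / math.sqrt(x*x + 1)))
assert math.isclose(math.atan(x), math.acos(1 / math.sqrt(x*x + 1)))
assert math.isclose(math.sin(math.acos(0.3)), math.sqrt(1 - 0.3**2))
```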

1.2 Hyperbolic functions

The hyperbolic functions are defined by: \[ \sinh(x)=\frac{{\rm e}^x-{\rm e}^{-x}}{2}~~,~~~\cosh(x)=\frac{{\rm e}^x+{\rm e}^{-x}}{2}~~,~~~\tanh(x)=\frac{\sinh(x)}{\cosh(x)} \] From this it follows that $\cosh^2(x)-\sinh^2(x)=1$. For the inverse hyperbolic functions holds: \[ {\rm arsinh}(x)=\ln|x+\sqrt{x^2+1}|~~~,~~~{\rm arcosh}(x)={\rm arsinh}(\sqrt{x^2-1})=\ln(x+\sqrt{x^2-1})~~~(x\geq1) \]
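A short numerical check of the hyperbolic identity and the logarithmic form of ${\rm arsinh}$ (sample values arbitrary):

```python
import math

# cosh^2 - sinh^2 = 1 and arsinh(x) = ln|x + sqrt(x^2 + 1)| for a few values.
for x in (-2.0, 0.5, 3.0):
    assert math.isclose(math.cosh(x)**2 - math.sinh(x)**2, 1.0)
    assert math.isclose(math.asinh(x), math.log(abs(x + math.sqrt(x*x + 1))))
```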

1.3 Calculus

The derivative of a function is defined as: \[ \frac{df}{dx}=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h} \] Derivatives obey the following algebraic rules: \[ d(x\pm y)=dx\pm dy~~,~~d(xy)=xdy+ydx~~,~~d\left(\frac{x}{y}\right)=\frac{ydx-xdy}{y^2} \] For the derivative of the inverse function $f^{\rm inv}(y)$, defined by $f^{\rm inv}(f(x))=x$, holds at point $P=(x,f(x))$: \[ \left(\frac{df^{\rm inv}(y)}{dy}\right)_P\cdot\left(\frac{df(x)}{dx}\right)_P=1 \] Chain rule: if $f=f(g(x))$, then holds \[ \frac{df}{dx}=\frac{df}{dg}\frac{dg}{dx} \] Further, for the $n$th derivative of a product of functions holds (Leibniz' rule): \[ (f\cdot g)^{(n)}=\sum\limits_{k=0}^n{n\choose k}f^{(n-k)}\cdot g^{(k)} \] For the primitive function $F(x)$ holds: $F'(x)=f(x)$. An overview of derivatives and primitives is:

\begin{tabular}{|l|l|l|}
\hline
$y=f(x)$ & $dy/dx=f'(x)$ & $\int f(x)dx$\\
\hline
$ax^n$ & $anx^{n-1}$ & $a(n+1)^{-1}x^{n+1}~~(n\neq-1)$\\
$1/x$ & $-x^{-2}$ & $\ln|x|$\\
$a$ & $0$ & $ax$\\
$a^x$ & $a^x\ln(a)$ & $a^x/\ln(a)$\\
${\rm e}^x$ & ${\rm e}^x$ & ${\rm e}^x$\\
$^a\log(x)$ & $(x\ln(a))^{-1}$ & $(x\ln(x)-x)/\ln(a)$\\
$\ln(x)$ & $1/x$ & $x\ln(x)-x$\\
$\sin(x)$ & $\cos(x)$ & $-\cos(x)$\\
$\cos(x)$ & $-\sin(x)$ & $\sin(x)$\\
$\tan(x)$ & $\cos^{-2}(x)$ & $-\ln|\cos(x)|$\\
$\sin^{-1}(x)$ & $-\sin^{-2}(x)\cos(x)$ & $\ln|\tan(\frac{1}{2}x)|$\\
$\sinh(x)$ & $\cosh(x)$ & $\cosh(x)$\\
$\cosh(x)$ & $\sinh(x)$ & $\sinh(x)$\\
$\arcsin(x)$ & $1/\sqrt{1-x^2}$ & $x\arcsin(x)+\sqrt{1-x^2}$\\
$\arccos(x)$ & $-1/\sqrt{1-x^2}$ & $x\arccos(x)-\sqrt{1-x^2}$\\
$\arctan(x)$ & $(1+x^2)^{-1}$ & $x\arctan(x)-\frac{1}{2}\ln(1+x^2)$\\
$(a+x^2)^{-1/2}$ & $-x(a+x^2)^{-3/2}$ & $\ln|x+\sqrt{a+x^2}|$\\
$(a^2-x^2)^{-1}$ & $2x(a^2-x^2)^{-2}$ & $\frac{1}{2a}\ln|(a+x)/(a-x)|$\\
\hline
\end{tabular}
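The differentiation rules above can be verified with finite differences; a small Python sketch (step size and test points are arbitrary choices):

```python
import math

def num_deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

# Chain rule: d/dx sin(x^2) = cos(x^2) * 2x.
x = 0.8
assert math.isclose(num_deriv(lambda t: math.sin(t*t), x),
                    math.cos(x*x) * 2*x, rel_tol=1e-6)

# Inverse-function rule: (d exp/dy) * (d ln/dx) = 1 at matching points.
x = 2.0
assert math.isclose(num_deriv(math.log, x) * num_deriv(math.exp, math.log(x)),
                    1.0, rel_tol=1e-6)
```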

The radius of curvature $\rho$ of a curve is given by: $\displaystyle\rho=\frac{(1+(y')^2)^{3/2}}{|y''|}$; the curvature itself is $1/\rho$.
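For a circle of radius $R$ the radius of curvature should come out as $R$ everywhere; a check with $y=\sqrt{R^2-x^2}$ and the analytic $y'$, $y''$ (values arbitrary):

```python
import math

# y = sqrt(R^2 - x^2): y' = -x/sqrt(R^2 - x^2), y'' = -R^2/(R^2 - x^2)^(3/2).
R, x = 2.0, 0.6
y1 = -x / math.sqrt(R*R - x*x)
y2 = -R*R / (R*R - x*x)**1.5
rho = (1 + y1*y1)**1.5 / abs(y2)
assert math.isclose(rho, R)
```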

The theorem of l'H\^opital: if $f(a)=0$ and $g(a)=0$, then $\displaystyle\lim_{x\rightarrow a}\frac{f(x)}{g(x)}=\lim_{x\rightarrow a}\frac{f'(x)}{g'(x)}$, provided the limit on the right-hand side exists.

1.4 Limits

\[ \lim_{x\rightarrow0}\frac{\sin(x)}{x}=1~~,~~ \lim_{x\rightarrow0}\frac{{\rm e}^x-1}{x}=1~~,~~ \lim_{x\rightarrow0}\frac{\tan(x)}{x}=1~~,~~ \lim_{k\rightarrow0}(1+k)^{1/k}={\rm e}~~,~~ \lim_{x\rightarrow\infty}\left(1+\frac{n}{x}\right)^x={\rm e}^n \] \[ \lim_{x\downarrow0}x^a\ln(x)=0~~(a>0)~~,~~ \lim_{x\rightarrow\infty}\frac{\ln^p(x)}{x^a}=0~~(a>0)~~,~~ \lim_{x\rightarrow0}\frac{\ln(1+ax)}{x}=a~~,~~ \lim_{x\rightarrow\infty}\frac{x^p}{a^x}=0~~\mbox{if }|a|>1. \] \[ \lim_{x\rightarrow\infty}x\left(a^{1/x}-1\right)=\ln(a)~~,~~ \lim_{x\rightarrow0}\frac{\arcsin(x)}{x}=1~~,~~ \lim_{x\rightarrow\infty}\sqrt[x]{x}=1 \]
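Several of these limits can be approached numerically with a small argument; a Python sketch (the step $10^{-6}$ and the tolerances are arbitrary):

```python
import math

x = 1e-6
assert math.isclose(math.sin(x) / x, 1.0)
assert math.isclose((math.exp(x) - 1) / x, 1.0, rel_tol=1e-5)
assert math.isclose((1 + x)**(1 / x), math.e, rel_tol=1e-5)
# x^a ln(x) -> 0 for a = 1/2: still small even though ln(x) diverges.
assert abs(math.sqrt(x) * math.log(x)) < 0.02
```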

1.5 Complex numbers and quaternions

1.5.1 Complex numbers

The complex number $z=a+bi$ with $a$ and $b\in I\hspace{-1mm}R$. $a$ is the real part, $b$ the imaginary part of $z$. $|z|=\sqrt{a^2+b^2}$. By definition holds: $i^2=-1$. Every complex number can be written as $z=|z|\exp(i\varphi)$, with $\varphi=\arg(z)$, so $\tan(\varphi)=b/a$. The complex conjugate of $z$ is defined as $\overline{z}=z^*:=a-bi$. Further holds: \begin{eqnarray*} (a+bi)(c+di)&=&(ac-bd)+i(ad+bc)\\ (a+bi)+(c+di)&=&a+c+i(b+d)\\ \frac{a+bi}{c+di}&=&\frac{(ac+bd)+i(bc-ad)}{c^2+d^2} \end{eqnarray*} Goniometric functions can be written in terms of complex exponentials: \begin{eqnarray*} \sin(x)&=&\frac{1}{2i}({\rm e}^{ix}-{\rm e}^{-ix})\\ \cos(x)&=&\frac{1}{2}({\rm e}^{ix}+{\rm e}^{-ix}) \end{eqnarray*} From this it follows that $\cos(ix)=\cosh(x)$ and $\sin(ix)=i\sinh(x)$. It also follows that\newline ${\rm e}^{\pm ix}=\cos(x)\pm i\sin(x)$, so ${\rm e}^{iz}\neq0~\forall z$. Also the theorem of De Moivre follows from this:\\ $(\cos(\varphi)+i\sin(\varphi))^n=\cos(n\varphi)+i\sin(n\varphi)$.

Products and quotients of complex numbers can be written as: \begin{eqnarray*} z_1\cdot z_2&=&|z_1|\cdot|z_2|(\cos(\varphi_1+\varphi_2)+i\sin(\varphi_1+\varphi_2))\\ \frac{z_1}{z_2}&=&\frac{|z_1|}{|z_2|}(\cos(\varphi_1-\varphi_2)+i\sin(\varphi_1-\varphi_2)) \end{eqnarray*} The following can be derived: \[ |z_1+z_2|\leq|z_1|+|z_2|~~,~~|z_1-z_2|\geq|~|z_1|-|z_2|~| \] And from $z=r\exp(i\theta)$ follows: $\ln(z)=\ln(r)+i\theta$; because the argument is only determined up to multiples of $2\pi$ the complex logarithm is multi-valued: $\ln(z)=\ln(r)+i(\theta+2n\pi)$, $n\in\mathbb{Z}$.
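Python's complex type lets one check the polar product and quotient rules and De Moivre's theorem; a sketch with arbitrary sample values:

```python
import cmath

z1, z2 = 3 + 4j, 1 - 2j
r1, p1 = cmath.polar(z1)
r2, p2 = cmath.polar(z2)
assert cmath.isclose(cmath.rect(r1*r2, p1 + p2), z1 * z2)   # product rule
assert cmath.isclose(cmath.rect(r1/r2, p1 - p2), z1 / z2)   # quotient rule

# De Moivre for n = 5.
phi, n = 0.7, 5
assert cmath.isclose((cmath.cos(phi) + 1j*cmath.sin(phi))**n,
                     cmath.cos(n*phi) + 1j*cmath.sin(n*phi))
```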

1.5.2 Quaternions

Quaternions are defined as: $z=a+bi+cj+dk$, with $a,b,c,d\in I\hspace{-1mm}R$ and $i^2=j^2=k^2=-1$. The products of $i,j,k$ with each other are given by $ij=-ji=k$, $jk=-kj=i$ and $ki=-ik=j$.
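The multiplication table above fixes the full quaternion product; a minimal Python sketch (the tuple representation $(a,b,c,d)\sim a+bi+cj+dk$ and the name `qmul` are ours):

```python
# Quaternion product from i^2 = j^2 = k^2 = -1, ij = k, jk = i, ki = j.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k                   # ij = k
assert qmul(j, i) == (0, 0, 0, -1)       # ji = -k: not commutative
assert qmul(i, i) == (-1, 0, 0, 0)       # i^2 = -1
```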

1.6 Geometry

1.6.1 Triangles

The sine rule is: \[ \frac{a}{\sin(\alpha)}=\frac{b}{\sin(\beta)}=\frac{c}{\sin(\gamma)} \] Here, $\alpha$ is the angle opposite to $a$, $\beta$ is opposite to $b$ and $\gamma$ opposite to $c$. The cosine rule is: $a^2=b^2+c^2-2bc\cos(\alpha)$. For each triangle holds: $\alpha+\beta+\gamma=180^\circ$.

Further holds: \[ \frac{\tan(\frac{1}{2}(\alpha+\beta))}{\tan(\frac{1}{2}(\alpha-\beta))}=\frac{a+b}{a-b} \] The area of a triangle is given by $\frac{1}{2} ab\sin(\gamma)=\frac{1}{2} ah_a=\sqrt{s(s-a)(s-b)(s-c)}$ (Heron's formula) with $h_a$ the altitude on $a$ and $s=\frac{1}{2}(a+b+c)$.
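The two area formulas can be cross-checked, using the cosine rule to obtain the third side from two sides and their included angle; a Python sketch (side lengths arbitrary):

```python
import math

a, b, gamma = 3.0, 4.0, math.radians(60)
c = math.sqrt(a*a + b*b - 2*a*b*math.cos(gamma))   # cosine rule
s = 0.5 * (a + b + c)
heron = math.sqrt(s * (s - a) * (s - b) * (s - c))
assert math.isclose(0.5 * a * b * math.sin(gamma), heron)
```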

1.6.2 Curves

Cycloid: if a circle with radius $a$ rolls along a straight line, the trajectory of a point on this circle has the following parameter equation: \[ x=a(t+\sin(t))~~,~~y=a(1+\cos(t)) \] Epicycloid: if a small circle with radius $a$ rolls along a big circle with radius $R$, the trajectory of a point on the small circle has the following parameter equation: \[ x=a\sin\left(\frac{R+a}{a}t\right)+(R+a)\sin(t)~~,~~ y=a\cos\left(\frac{R+a}{a}t\right)+(R+a)\cos(t) \] Hypocycloid: if a small circle with radius $a$ rolls inside a big circle with radius $R$, the trajectory of a point on the small circle has the following parameter equation: \[ x=a\sin\left(\frac{R-a}{a}t\right)+(R-a)\sin(t)~~,~~ y=-a\cos\left(\frac{R-a}{a}t\right)+(R-a)\cos(t) \] An epicycloid with $a=R$ is called a cardioid. Its parameter equation in polar coordinates is: $r=2a[1-\cos(\varphi)]$.

1.7 Vectors

The inner product is defined by: $\displaystyle\vec{a}\cdot\vec{b}=\sum_i a_ib_i=|\vec{a}|\cdot|\vec{b}|\cos(\varphi)$

where $\varphi$ is the angle between $\vec{a}$ and $\vec{b}$. The cross product (outer product) is defined in $ I\hspace{-1mm}R^3$ by: \[ \vec{a}\times\vec{b}=\left(\begin{array}{c} a_yb_z-a_zb_y\\ a_zb_x-a_xb_z\\ a_xb_y-a_yb_x\end{array}\right)= \left|\begin{array}{ccc} \vec{e}_x&\vec{e}_y&\vec{e}_z\\ a_x&a_y&a_z\\ b_x&b_y&b_z\end{array}\right| \] Further holds: $|\vec{a}\times\vec{b}|=|\vec{a}|\cdot|\vec{b}|\sin(\varphi)$, and the ``bac-cab'' rule $\vec{a}\times(\vec{b}\times\vec{c})=(\vec{a}\cdot\vec{c})\vec{b}-(\vec{a}\cdot\vec{b})\vec{c}$.
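The component formula and the bac-cab rule are easy to test on integer vectors, where the comparison is exact; a Python sketch (vectors arbitrary):

```python
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

def cross(u, v):
    # Component formula for the cross product in R^3.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

a, b, c = (1, 2, 3), (-2, 0, 1), (4, -1, 2)
lhs = cross(a, cross(b, c))
rhs = tuple(dot(a, c)*bi - dot(a, b)*ci for bi, ci in zip(b, c))
assert lhs == rhs        # a x (b x c) = (a.c) b - (a.b) c
```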

1.8 Series

1.8.1 Expansion

Newton's binomial theorem (the binomium of Newton) is: \[ (a+b)^n=\sum_{k=0}^n{n\choose k}a^{n-k}b^k \] where $\displaystyle{n\choose k}:=\frac{n!}{k!(n-k)!}$.

By subtracting the series $\sum\limits_{k=0}^n r^k$ and $r\sum\limits_{k=0}^n r^k$ one finds: \[ \sum_{k=0}^n r^k=\frac{1-r^{n+1}}{1-r} \] and for $|r|<1$ this gives the geometric series: $\displaystyle\sum_{k=0}^\infty r^k=\frac{1}{1-r}$.
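The finite sum and its geometric limit can be checked directly; a Python sketch (ratio and lengths arbitrary):

```python
import math

# Finite geometric sum vs the closed form (1 - r^(n+1)) / (1 - r).
r, n = 0.5, 10
assert math.isclose(sum(r**k for k in range(n + 1)), (1 - r**(n + 1)) / (1 - r))

# For |r| < 1 the partial sums approach 1 / (1 - r).
assert abs(sum(r**k for k in range(60)) - 1 / (1 - r)) < 1e-15
```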

The arithmetic series is given by: $\displaystyle\sum_{n=0}^N(a+nV)=a(N+1)+\frac{1}{2} N(N+1)V$.

The expansion of a function around the point $a$ is given by the Taylor series: \[ f(x)=f(a)+(x-a)f'(a)+\frac{(x-a)^2}{2}f''(a)+\cdots+\frac{(x-a)^n}{n!}f^{(n)}(a)+R_n \] where the remainder (in Cauchy's form, with $h=x-a$ and $0<\theta<1$) is given by: \[ R_n(h)=(1-\theta)^n\frac{h^{n+1}}{n!}f^{(n+1)}(\theta h) \] and is subject to: \[ \frac{mh^{n+1}}{(n+1)!}\leq R_n(h)\leq\frac{Mh^{n+1}}{(n+1)!} \] if $m\leq f^{(n+1)}\leq M$ on the interval. From this one can deduce the binomial series \[ (1+x)^\alpha=\sum_{n=0}^\infty{\alpha\choose n}x^n~~~(|x|<1) \] One can derive that: \[ \sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}~,~~ \sum_{n=1}^\infty\frac{1}{n^4}=\frac{\pi^4}{90}~,~~ \sum_{n=1}^\infty\frac{1}{n^6}=\frac{\pi^6}{945} \] \[ \sum_{k=1}^nk^2=\mbox{$\frac{1}{6}$}n(n+1)(2n+1)~,~~ \sum_{n=1}^\infty\frac{(-1)^{n+1}}{n^2}=\frac{\pi^2}{12}~,~~ \sum_{n=1}^\infty\frac{(-1)^{n+1}}{n}=\ln(2) \] \[ \sum_{n=1}^\infty\frac{1}{4n^2-1}=\mbox{$\frac{1}{2}$}~,~~ \sum_{n=1}^\infty\frac{1}{(2n-1)^2}=\frac{\pi^2}{8}~,~~ \sum_{n=1}^\infty\frac{1}{(2n-1)^4}=\frac{\pi^4}{96}~,~~ \sum_{n=1}^\infty\frac{(-1)^{n+1}}{(2n-1)^3}=\frac{\pi^3}{32} \]
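Partial sums illustrate two of these series values; a Python sketch (cut-offs and tolerances are arbitrary; the tail of $\sum1/n^2$ after $N$ terms is of order $1/N$):

```python
import math

# Sum of 1/n^2 approaches pi^2/6; with N = 10^5 the error is ~1e-5.
N = 100_000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
assert abs(partial - math.pi**2 / 6) < 2e-5

# The alternating harmonic series converges to ln(2).
alt = sum((-1)**(n + 1) / n for n in range(1, N + 1))
assert abs(alt - math.log(2)) < 1e-4
```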

1.8.2 Convergence and divergence of series

If $\sum\limits_n|u_n|$ converges, $\sum\limits_n u_n$ also converges.

If $\lim\limits_{n\rightarrow\infty}u_n\neq0$ then $\sum\limits_n u_n$ is divergent.

An alternating series of which the absolute values of the terms decrease monotonically to 0 is convergent (Leibniz).

If $f(x)$ is positive and monotonically decreasing and $\int_p^{\infty}f(x)dx<\infty$, then $\sum\limits_n f(n)$ is convergent (integral test).

If $u_n>0~\forall n$ then $\sum\limits_n u_n$ is convergent if $\sum\limits_n\ln(1+u_n)$ is convergent.

If $u_n=c_nx^n$ the radius of convergence $\rho$ of $\sum\limits_n u_n$ is given by: $\displaystyle\frac{1}{\rho}=\lim_{n\rightarrow\infty}\sqrt[n]{|c_n|}= \lim_{n\rightarrow\infty}\left|\frac{c_{n+1}}{c_n}\right|$ (when these limits exist).

The series $\displaystyle\sum_{n=1}^\infty \frac{1}{n^p}$ is convergent if $p>1$ and divergent if $p\leq1$.

If $\displaystyle\lim_{n\rightarrow\infty}\frac{u_n}{v_n}=p$, then the following is true: if $p>0$ then $\sum\limits_{n}u_n$ and $\sum\limits_{n}v_n$ are both divergent or both convergent; if $p=0$ holds: if $\sum\limits_{n}v_n$ is convergent, then $\sum\limits_{n}u_n$ is also convergent.

If $L$ is defined by: $\displaystyle L=\lim_{n\rightarrow\infty}\sqrt[n]{|u_n|}$, or by: $\displaystyle L=\lim_{n\rightarrow\infty}\left|\frac{u_{n+1}}{u_n}\right|$, then $\sum\limits_{n}u_n$ is divergent if $L>1$ and convergent if $L<1$.

1.8.3 Convergence and divergence of functions

\label{sec:convf} $f(x)$ is continuous in $x=a$ only if the left- and right-hand limits are equal and coincide with $f(a)$: $\lim\limits_{x\uparrow a}f(x)=\lim\limits_{x\downarrow a}f(x)=f(a)$. This is written as: $f(a^-)=f(a^+)$.

If $f(x)$ is continuous in $a$ and: $\lim\limits_{x\uparrow a}f'(x)=\lim\limits_{x\downarrow a}f'(x)$, then $f(x)$ is differentiable in $x=a$.

We define: $\|f\|_W:={\rm sup}(|f(x)|~|~x\in W)$, and suppose that $\lim\limits_{n\rightarrow\infty}f_n(x)=f(x)$ pointwise. Then holds: $\{f_n\}$ is uniformly convergent if $\lim\limits_{n\rightarrow\infty}\|f_n-f\|=0$, or: $\forall(\varepsilon>0)\exists(N)\forall(n\geq N)~\|f_n-f\|<\varepsilon$.

Weierstrass' test: if $\sum\|u_n\|_W$ is convergent, then $\sum u_n$ is uniformly convergent.

We define $\displaystyle S(x)=\sum_{n=N}^\infty u_n(x)$ and $\displaystyle F(y)=\int\limits_a^bf(x,y)dx:=F$. Then it can be proved that:

\begin{tabular}{|l|l|l|l|}
\hline
Theorem & For & Demands on $W$ & Then holds on $W$\\
\hline
C & sequences & $f_n$ continuous, $\{f_n\}$ uniformly convergent & $f$ is continuous\\
C & series & $S(x)$ uniformly convergent, $u_n$ continuous & $S$ is continuous\\
C & integral & $f$ is continuous & $F$ is continuous\\
\hline
I & sequences & $f_n$ can be integrated, $\{f_n\}$ uniformly convergent & $f$ can be integrated, $\int f(x)dx=\lim\limits_{n\rightarrow\infty}\int f_ndx$\\
I & series & $S(x)$ uniformly convergent, $u_n$ can be integrated & $S$ can be integrated, $\int Sdx=\sum\int u_ndx$\\
I & integral & $f$ is continuous & $\int Fdy=\iint f(x,y)dxdy$\\
\hline
D & sequences & $\{f_n\}\subset C^1$; $\{f_n'\}$ unif.conv $\rightarrow\phi$ & $f'=\phi(x)$\\
D & series & $u_n\in C^1$; $\sum u_n$ conv; $\sum u_n'$ u.c. & $S'(x)=\sum u_n'(x)$\\
D & integral & $\partial f/\partial y$ continuous & $F_y=\int f_y(x,y)dx$\\
\hline
\end{tabular}

1.9 Products and quotients

For $a,b,c,d\in I\hspace{-1mm}R$ holds:\\ The distributive property: $a(b+c)=ab+ac$, hence $(a+b)(c+d)=ac+ad+bc+bd$\\ The associative property: $a(bc)=(ab)c$; combined with commutativity this gives $a(bc)=b(ac)=c(ab)$\\ The commutative property: $a+b=b+a$, $ab=ba$.

Further holds: \[ \frac{a^{2n}-b^{2n}}{a\pm b}=a^{2n-1}\mp a^{2n-2}b+a^{2n-3}b^2\mp\cdots\mp b^{2n-1}~~~,~~~ \frac{a^{2n+1}+b^{2n+1}}{a+b}=\sum_{k=0}^{2n}(-1)^k a^{2n-k}b^{k} \] \[ (a\pm b)(a^2\mp ab+b^2)=a^3\pm b^3~,~~(a+b)(a-b)=a^2-b^2~,~~ \frac{a^3\pm b^3}{a\pm b}=a^2\mp ab+b^2 \]

1.10 Logarithms

Definition: $^a\log(x)=b\Leftrightarrow a^b=x$. For logarithms with base $e$ one writes $\ln(x)$.

Rules: $\log(x^n)=n\log(x)$, $\log(a)+\log(b)=\log(ab)$, $\log(a)-\log(b)=\log(a/b)$.
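These rules can be checked with Python's `math.log(x, a)`, the base-$a$ logarithm matching the $^a\log$ notation above (sample values arbitrary):

```python
import math

a, x, y = 2.0, 8.0, 5.0
assert math.isclose(math.log(x**3, a), 3 * math.log(x, a))
assert math.isclose(math.log(x, a) + math.log(y, a), math.log(x * y, a))
assert math.isclose(math.log(x, a) - math.log(y, a), math.log(x / y, a))
```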

1.11 Polynomials

Equations of the type \[ \sum_{k=0}^n a_kx^k=0 \] have exactly $n$ roots in $\mathbb{C}$, counted with multiplicity: each polynomial $p(z)$ of order $n\geq1$ has at least one root in $\mathbb{C}$ (the fundamental theorem of algebra). If all $a_k\in I\hspace{-1mm}R$ holds: when $x=p$ with $p\in\mathbb{C}$ is a root, then $p^*$ is also a root. Polynomials up to and including order 4 have a general analytical solution; for polynomials of order $\geq5$ no general analytical solution exists (theorem of Abel and Ruffini).

For $a,b,c\in I\hspace{-1mm}R$ and $a\neq0$ holds: the 2nd order equation $ax^2+bx+c=0$ has the general solution: \[ x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} \] For $a,b,c,d\in I\hspace{-1mm}R$ and $a\neq0$ holds: the 3rd order equation $ax^3+bx^2+cx+d=0$ has the general analytical solution: \begin{eqnarray*} x_1&=&~K-\frac{3ac-b^2}{9a^2K}-\frac{b}{3a}\\ x_2=x_3^*&=&-\frac{K}{2}+\frac{3ac-b^2}{18a^2K}-\frac{b}{3a}+i\frac{\sqrt{3}}{2}\left(K+\frac{3ac-b^2}{9a^2K}\right)\\ \end{eqnarray*} with $\displaystyle K=\left(\frac{9abc-27da^2-2b^3}{54a^3}+ \frac{\sqrt{3}\,\sqrt{4ac^3-c^2b^2-18abcd+27a^2d^2+4db^3}}{18a^2}\right)^{1/3}$
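The quadratic case is straightforward to implement; a sketch using complex arithmetic so a negative discriminant also works (the cubic formula can be coded similarly, but needs care with the branch of the cube root):

```python
import cmath

def quadratic_roots(a, b, c):
    # Roots of ax^2 + bx + c = 0, a != 0, via the quadratic formula.
    d = cmath.sqrt(b*b - 4*a*c)
    return (-b + d) / (2*a), (-b - d) / (2*a)

x1, x2 = quadratic_roots(1, -3, 2)     # x^2 - 3x + 2 = (x - 1)(x - 2)
assert {x1, x2} == {1, 2}
x1, x2 = quadratic_roots(1, 0, 1)      # x^2 + 1 = 0 has roots +-i
assert {x1, x2} == {1j, -1j}
```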

1.12 Primes

A prime is a number $\in I\hspace{-1mm}N$, $>1$, that is divisible only by itself and 1. There are infinitely many primes. Proof: suppose that the collection of primes $P$ were finite, then construct the number $q=1+\prod\limits_{p\in P}p$. Then $q\equiv1~({\rm mod}~p)$ for every $p\in P$, so no prime from $P$ divides $q$, and hence $q$ cannot be written as a product of primes from $P$. This is a contradiction.

If $\pi(x)$ is the number of primes $\leq x$, then holds: \[ \lim_{x\rightarrow\infty}\frac{\pi(x)}{x/\ln(x)}=1~~~\mbox{and}~~~ \lim_{x\rightarrow\infty}\frac{\pi(x)}{\int\limits_2^x\frac{dt}{\ln(t)}}=1 \] For each $N\geq2$ there is a prime between $N$ and $2N$ (Bertrand's postulate).
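The prime-counting estimate can be illustrated with a small sieve of Eratosthenes; a Python sketch (the bound $10^4$ and the 15\% tolerance are arbitrary; $x/\ln(x)$ converges slowly):

```python
import math

def primes_upto(n):
    # Sieve of Eratosthenes: mark multiples of each prime as composite.
    is_p = [True] * (n + 1)
    is_p[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    return [i for i, p in enumerate(is_p) if p]

pi_x = len(primes_upto(10_000))
assert pi_x == 1229                                   # pi(10^4)
assert abs(pi_x / (10_000 / math.log(10_000)) - 1) < 0.15
```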

The numbers $F_k:=2^{2^k}+1$ with $k\in I\hspace{-1mm}N$ are called Fermat numbers. The first five, $F_0$ through $F_4$, are prime; no larger Fermat primes are known.

The numbers $M_k:=2^k-1$ are called Mersenne numbers. They occur when one searches for perfect numbers, which are numbers $n\in I\hspace{-1mm}N$ that are the sum of their proper divisors, for example $6=1+2+3$. There are 23 values $k<12000$ for which the Mersenne number is prime: $k\in\{2,3,5,7,13,17,19,31,61,89,107,127,521,$ $607,1279,2203,2281,3217,4253,4423,9689,9941,11213\}$.

To check whether a given number $n$ is prime one can use a sieve method. The first known sieve method was developed by Eratosthenes. A faster method for large numbers is the Fermat test with 4 bases, which does not prove that a number is prime but gives a large probability:

  1. Take the first 4 primes: $b=\{2,3,5,7\}$,
  2. Take $w(b)=b^{n-1}~{\rm mod}~n$, for each $b$,
  3. If $w=1$ for each $b$, then $n$ is probably prime. For each other value of $w$, $n$ is certainly not prime.
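The steps above can be sketched in Python, using the built-in three-argument `pow` for modular exponentiation (the function name is ours; note that Carmichael numbers coprime to the bases still slip through, so "probably prime" is the strongest conclusion):

```python
def fermat_probably_prime(n, bases=(2, 3, 5, 7)):
    # Fermat test: a prime n satisfies b^(n-1) = 1 (mod n) for gcd(b, n) = 1.
    if n < 2:
        return False
    for b in bases:
        if n == b:
            continue
        if pow(b, n - 1, n) != 1:
            return False        # certainly composite
    return True                 # probably prime

assert fermat_probably_prime(97)
assert not fermat_probably_prime(91)       # 7 * 13
assert fermat_probably_prime(29341)        # Carmichael number 13*37*61: a false positive
```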