Whirlwind Tour - History

Study notes for Mathematical Modeling and Simulation

History

Buffon’s Needle Problem

Buffon's Needle Problem is a classic problem in geometric probability, named after the French mathematician Georges-Louis Leclerc, Comte de Buffon. The problem involves randomly dropping a needle of length L onto a floor marked with parallel lines a distance d apart. For a needle no longer than the line spacing (L ≤ d), the probability that the needle crosses one of the lines is:

P = (2L) / (π d)

Solving this equation for π turns the experiment into a way of estimating π from the observed crossing probability:

π = (2L) / (P d)
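For the short-needle case (L ≤ d), the formula can be checked directly using the same random quantities the R code below draws: the needle's angle θ, uniform on [0, π], and the distance x from the needle's center to the nearest line, uniform on [0, d/2]. The needle crosses a line exactly when x ≤ (L/2) sin θ, so

P = (1/π) ∫ from 0 to π [ ((L/2) sin θ) / (d/2) ] dθ = (L / (π d)) ∫ from 0 to π sin θ dθ = (2L) / (π d)

since sin θ integrates to 2 over [0, π].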
Buffon’s Needle Example R Code
# Simulate Buffon's Needle Problem to estimate P and pi
set.seed(1)
# Define the number of trials
n_trials <- 10000000
# Define the length of the needle and the distance between the lines
L <- 1
d <- 2
# Generate n_trials random values for the angle between the needle and the lines
# (the needle's extent perpendicular to the lines is (L/2) * sin(theta))
theta <- runif(n_trials, min = 0, max = pi)
# Generate n_trials random values for the position of the center of the needle
# (its distance from the nearest line, between 0 and d/2)
x <- runif(n_trials, min = 0, max = d/2)
# Calculate the position of the ends of the needle
x_end1 <- x - (L/2) * sin(theta)
x_end2 <- x + (L/2) * sin(theta)
# Determine if the needle crosses a line
crosses <- ifelse(floor(x_end1/d) != floor(x_end2/d), 1, 0)
# Calculate the estimated value of P and pi
P_est <- mean(crosses)
pi_est <- (2 * L) / (P_est * d)
# Calculate the true value of P and pi
P_true <- (2 * L) / (pi * d)
pi_true <- (2 * L) / (P_true * d)
# Print the results
cat("Estimated value of P:", P_est, "\n")

Estimated value of P: 0.

cat("Estimated value of pi:", pi_est, "\n")

Estimated value of pi: 3.

cat("True value of P:", P_true, "\n")

True value of P: 0.3183099

cat("True value of pi:", pi_true, "\n")

True value of pi: 3.141593

Buffon’s Needle Example Python Code

Python Notebook

Beer and Student's t distribution

The Student's t-distribution, also known simply as the t-distribution, has an interesting history that dates back to the early 20th century. It was developed by the English statistician William Sealy Gosset, who worked at the famous Guinness Brewery in Dublin, Ireland, and published under the pseudonym "Student." Gosset published his work on the t-distribution in 1908 in the paper titled "The Probable Error of a Mean."

The story behind the development of the t-distribution is closely tied to the challenges Gosset faced at Guinness. At the time, the brewery wanted to improve the quality and consistency of its beer, which required careful monitoring of the ingredients and the brewing process. To achieve this, the brewery needed to analyze small samples of barley, yeast, and other ingredients to ensure their quality, as well as to monitor the brewing process itself.

Traditional statistical methods based on the Normal distribution were not suitable for making inferences from such small samples, as they tended to underestimate the true variability in the data. This led Gosset to develop the t-distribution, which was specifically designed to address the problem of small sample sizes. The t-distribution is a family of probability distributions similar in shape to the normal distribution but with heavier tails, which allows for more accurate estimation of the variability in small samples.
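To see what the heavier tails mean in practice, the short R sketch below compares the Normal and t critical values and the 95% confidence intervals they produce for a small sample. The sample size, mean, and standard deviation are arbitrary illustrative choices, not values from Gosset's work.

# Compare Normal-based and t-based 95% confidence intervals for a small sample
set.seed(1)
n <- 8                                # a small sample, as in Gosset's setting
x <- rnorm(n, mean = 10, sd = 2)      # illustrative measurements
xbar <- mean(x)
se   <- sd(x) / sqrt(n)               # standard error of the mean
z_crit <- qnorm(0.975)                # Normal critical value (about 1.96)
t_crit <- qt(0.975, df = n - 1)       # t critical value with n - 1 degrees of freedom
cat("Normal critical value:", z_crit, "\n")
cat("t critical value:", t_crit, "\n")
cat("Normal 95% CI:", xbar - z_crit * se, "to", xbar + z_crit * se, "\n")
cat("t 95% CI:", xbar - t_crit * se, "to", xbar + t_crit * se, "\n")

The t-based interval is noticeably wider, reflecting the extra uncertainty that comes from estimating the standard deviation from only a few observations, which is exactly the small-sample problem Gosset faced.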

Among von Neumann's significant contributions to simulation and statistical methods was his work on self-replicating automata: theoretical constructs capable of reproducing themselves within a cellular grid. This concept has had a profound impact on the development of simulation techniques, particularly in the study of complex systems and emergent phenomena.

Industrial Applications: Manufacturing and Queueing Models

The 1960s marked a significant period in the history of manufacturing and queueing models, as the decade saw the rapid development and adoption of computer simulation and statistical methods for analyzing and optimizing complex systems. This period was characterized by advances in both the theoretical and practical aspects of manufacturing and queueing models, driven by the growing availability of powerful computers and the need for efficient manufacturing processes in the post-World War II era.

Queueing theory is the mathematical study of waiting lines, or queues. Its foundations were laid by the Danish engineer A.K. Erlang in the early 20th century, but the field experienced significant growth during the 1960s, when researchers such as David G. Kendall and John D.C. Little contributed to the development of advanced queueing models, including multi-server, priority, and network queueing systems. These models proved valuable for analyzing and optimizing a wide range of systems, including manufacturing processes, communication networks, and transportation systems. The application of queueing theory in manufacturing during the 1960s allowed companies to better manage production lines, reduce waiting times, and improve overall efficiency.

Manufacturing Simulation

The 1960s also saw the development of discrete-event simulation (DES) techniques, which made it possible to model complex systems such as manufacturing processes. DES simulates the behavior of a system by modeling individual events (such as machine breakdowns or product arrivals) and advancing the simulation clock from one event to the next, rather than in fixed time increments; a minimal sketch of this event-by-event logic is given at the end of this section. The approach proved to be a powerful tool for analyzing and optimizing manufacturing systems, since it enabled engineers and managers to test various scenarios and identify bottlenecks or inefficiencies. Notable simulation languages and tools emerged during this period, including the General Purpose Simulation System (GPSS), developed by Geoffrey Gordon in 1961, and SIMSCRIPT, developed by Harry Markowitz and Bernard Hausner in 1963. These tools facilitated the widespread adoption of simulation techniques in manufacturing and other industries.

Statistical Process Control (SPC)

The 1960s also saw the development and adoption of SPC techniques in manufacturing. SPC is a method for monitoring and controlling the quality of a manufacturing process by analyzing statistical data collected during production. The foundations of SPC were laid by Walter A. Shewhart in the 1920s, but its widespread adoption in the 1960s was driven by the need for better quality control in response to growing global competition. During this period, advances in statistical methods and computer technology enabled more sophisticated SPC techniques, such as control charts and process capability analysis. These methods allowed manufacturers to identify and address the sources of variability in their processes, leading to improved product quality and reduced waste.

Development of Simulation Languages

The history of simulation languages, and of easy-to-use modeling tools with graphical interfaces, can be traced from the early work of Harry Markowitz to more recent advances in simulation technology. This evolution has made simulation and statistical methods more accessible and efficient for researchers and practitioners across many fields.

As mentioned earlier, Harry Markowitz, along with Bernard Hausner and others, developed SIMSCRIPT at the RAND Corporation in 1963. SIMSCRIPT was one of the first high-level simulation languages designed for modeling complex systems with discrete-event simulation techniques, and it made simulation models easier to create and run, setting the stage for future developments in simulation languages and tools. Numerous other simulation languages and tools were created to meet the growing needs of the simulation community: GPSS (General Purpose Simulation System), developed by Geoffrey Gordon in 1961, offered a user-friendly approach to discrete-event simulation, and later languages and tools such as MODSIM, SLAM, and Arena each brought their own features and capabilities.

As computing power increased and graphical user interfaces (GUIs) became common in the late 1980s and 1990s, simulation software developers began to create more user-friendly modeling tools that took advantage of the graphical capabilities of modern computers. This led to simulation tools with intuitive drag-and-drop interfaces, making it easier for users to build, modify, and visualize simulation models without extensive programming knowledge. Examples of these easy-to-use modeling tools with graphical interfaces include:

- Arena: developed by Rockwell Automation, Arena is a widely used discrete-event simulation package that provides a visual modeling environment, allowing users to create models by dragging and dropping elements onto a canvas and defining their behavior through built-in dialogs.

- AnyLogic: a multi-method simulation package that supports system dynamics, agent-based, and discrete-event simulation approaches, offering a user-friendly graphical modeling environment.
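Returning to the event-by-event logic of discrete-event simulation described above, here is a minimal R sketch of a single-server queue with exponential interarrival and service times (an M/M/1 system). The arrival rate, service rate, and number of customers are arbitrary illustrative choices, not values from the text, and the sketch uses the simple customer-by-customer recursion that applies to a single queue rather than a general event-calendar DES engine.

# Minimal single-server (M/M/1) queue simulated customer by customer
set.seed(1)
lambda <- 0.8          # arrival rate (customers per unit time); illustrative
mu     <- 1.0          # service rate; illustrative
n_customers <- 10000   # number of arrivals to simulate
# Interarrival and service times are exponential; arrival times are cumulative sums
interarrival <- rexp(n_customers, rate = lambda)
service      <- rexp(n_customers, rate = mu)
arrival      <- cumsum(interarrival)
# Advance from one customer (event) to the next: a service starts when
# the customer has arrived and the server has finished the previous job
start  <- numeric(n_customers)
finish <- numeric(n_customers)
start[1]  <- arrival[1]
finish[1] <- start[1] + service[1]
for (i in 2:n_customers) {
  start[i]  <- max(arrival[i], finish[i - 1])
  finish[i] <- start[i] + service[i]
}
wait <- start - arrival   # time each customer spends waiting in the queue
cat("Average wait in queue:", mean(wait), "\n")
# Theoretical M/M/1 mean wait, (lambda/mu) / (mu - lambda), for comparison
cat("Theoretical mean wait:", (lambda / mu) / (mu - lambda), "\n")

A full discrete-event engine would keep an event calendar and handle many event types (arrivals, service completions, machine breakdowns, and so on); the recursion above is the special case for one server and one queue.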

Subsequent work expanded the applications of system dynamics to diverse areas such as economics, ecology, and social systems.

Agent-based modeling (ABM), which simulates the behavior of individual agents and their interactions within a system, has gained prominence since the 1990s. Rigorous theoretical work in areas such as cellular automata, complex adaptive systems, and swarm intelligence laid the foundation for ABM techniques, which have since been applied in fields including epidemiology, transportation, and the social sciences.

The development of stochastic optimization techniques, which optimize systems under uncertainty, has been crucial to the advancement of simulation and modeling. Prominent methods such as stochastic gradient descent, simulated annealing, and genetic algorithms have their roots in rigorous theoretical work on optimization, probability, and statistical mechanics, and they have found applications in machine learning, operations research, and engineering.

The resurgence of Bayesian statistics since the 1980s has been accompanied by significant advances in computational methods for Bayesian inference. Techniques such as Markov chain Monte Carlo (MCMC), Gibbs sampling, and variational inference emerged from rigorous theoretical work on computational algorithms and probabilistic methods, and they have enabled researchers to tackle complex Bayesian models, leading to advances in fields such as machine learning, data science, and spatial statistics.
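As a small, self-contained illustration of the MCMC idea mentioned above, the R sketch below runs a random-walk Metropolis sampler against a standard normal target. The target density, the proposal standard deviation, and the number of iterations are arbitrary illustrative choices, and log_target is simply a helper holding the log density of whichever target one wants to sample.

# Random-walk Metropolis sampler for a standard normal target density
set.seed(1)
log_target <- function(x) dnorm(x, mean = 0, sd = 1, log = TRUE)
n_iter   <- 50000
prop_sd  <- 1.5       # proposal standard deviation (a tuning parameter)
samples  <- numeric(n_iter)
current  <- 0         # starting value of the chain
accepted <- 0
for (i in 1:n_iter) {
  # Propose a move by adding symmetric normal noise to the current state
  proposal <- current + rnorm(1, mean = 0, sd = prop_sd)
  # Accept with probability min(1, target(proposal) / target(current))
  log_alpha <- log_target(proposal) - log_target(current)
  if (log(runif(1)) < log_alpha) {
    current  <- proposal
    accepted <- accepted + 1
  }
  samples[i] <- current
}
cat("Acceptance rate:", accepted / n_iter, "\n")
cat("Sample mean (target 0):", mean(samples), "\n")
cat("Sample sd (target 1):", sd(samples), "\n")

Only log_target changes when the target is a more complicated posterior; the accept/reject step stays the same, which is what makes the method so widely applicable.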