
Yiyang Dong

eCommerce Platform 01 - FrontEnd, React, UI Design

React Setup, Header and Footer

Create a folder /eshop and set up React:

```shell
eShop % npx create-react-app frontend
eShop % cd frontend
frontend % npm start
frontend % rm -rf .git
```

Install React Bootstrap and React Router:

```shell
frontend % npm i react-bootstrap
frontend % npm i react-router-dom react-router-bootstrap
```

Move .gitignore from eshop/frontend to /eshop. Create frontend/src/components/Header.js and Footer.js, and use `<LinkContainer to>` rather than a plain `href` link: client-side routing avoids a full page reload, so navigation is faster.
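A minimal sketch of what such a Header.js might look like (the Navbar markup and the "eShop" brand text are assumptions, not the post's actual listing; the key point is that `LinkContainer` wraps a Bootstrap element in a client-side router link):

```javascript
// Header.js — minimal sketch, assuming react-bootstrap and react-router-bootstrap
import React from 'react';
import { Navbar, Nav, Container } from 'react-bootstrap';
import { LinkContainer } from 'react-router-bootstrap';

const Header = () => (
  <Navbar bg="dark" variant="dark" expand="lg">
    <Container>
      {/* LinkContainer renders a router <Link>, so clicking it
          navigates without a full page reload */}
      <LinkContainer to="/">
        <Navbar.Brand>eShop</Navbar.Brand>
      </LinkContainer>
      <Nav className="ms-auto">
        <LinkContainer to="/cart">
          <Nav.Link>Cart</Nav.Link>
        </LinkContainer>
      </Nav>
    </Container>
  </Navbar>
);

export default Header;
```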

Statistical Learning Notes | SVM 2 - Support Vector Classifiers and Machines

Support Vector Classifiers

2.1. Motivation

Disadvantages of the Maximal Margin Classifier:

- A classifier based on a separating hyperplane will necessarily classify ALL of the training observations perfectly.
- This can lead to sensitivity to individual observations: a change in a single observation may lead to a dramatic change in the hyperplane.
- It may have overfit the training data.

Improvement: consider a classifier based on a hyperplane that does NOT perfectly separate the two classes, in the interest of:
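The sensitivity to a single observation can be sketched in one dimension, where the maximal margin boundary reduces to the midpoint between the closest pair of opposite-class points (a toy illustration with made-up numbers, not from the notes):

```python
import numpy as np

def maximal_margin_boundary(neg, pos):
    # 1-D maximal margin "hyperplane": the midpoint between the closest
    # opposite-class observations (assumes separable data with neg < pos)
    return (neg.max() + pos.min()) / 2

neg = np.array([1.0, 2.0, 3.0])   # class -1
pos = np.array([7.0, 8.0, 9.0])   # class +1
b1 = maximal_margin_boundary(neg, pos)   # midpoint of 3.0 and 7.0 -> 5.0

# One extra class -1 observation close to the other class drags
# the separating boundary dramatically:
neg2 = np.append(neg, 6.5)
b2 = maximal_margin_boundary(neg2, pos)  # midpoint of 6.5 and 7.0 -> 6.75

print(b1, b2)
```

A single new point moved the boundary from 5.0 to 6.75, which is exactly the instability a soft-margin support vector classifier is designed to avoid.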

Statistical Learning Notes | SVM 1 - HyperPlane, Support Vectors and Maximal Margin Classification

SVM 1. Maximal Margin Classifier

1.1. What is a HyperPlane?

Definition: in a p-dimensional space, a HyperPlane is a flat affine subspace of dimension p-1 (affine: the subspace need not pass through the origin).

- If $p=2$, a hyperplane is a flat 1-dimensional subspace, in other words a Line: $\beta_0 + \beta_1X_1 + \beta_2X_2 = 0$
- If $p=3$, a hyperplane is a flat 2-dimensional subspace, in other words a Plane.
- If $p>3$, the (p-1)-dimensional hyperplane is defined by the equation: $$\beta_0 + \beta_1X_1 + \beta_2X_2 + \dots + \beta_pX_p = 0$$
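The sign of the hyperplane expression tells which side of the hyperplane a point lies on, which is what the classifier uses. A quick numerical sketch for $p=3$ (the coefficients $\beta_0=-4$, $\beta=(1,2,1)$ are illustrative assumptions):

```python
import numpy as np

# Illustrative hyperplane beta0 + beta . x = 0 in p = 3 dimensions
beta0 = -4.0
beta = np.array([1.0, 2.0, 1.0])

def side(x):
    # +1 on one side of the hyperplane, -1 on the other, 0 exactly on it
    return np.sign(beta0 + beta @ x)

print(side(np.array([3.0, 2.0, 1.0])))  # 3 + 4 + 1 - 4 =  4 > 0 ->  1.0
print(side(np.array([1.0, 1.0, 0.0])))  # 1 + 2 + 0 - 4 = -1 < 0 -> -1.0
```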

Calculus 01

Introduction (optional for now)

The Universe Is Deeply Mathematical

This may be the only viable way for a universe containing us to exist, because a non-mathematical universe could not harbor intelligent life capable of asking the question. In any case, it is a mysterious and marvelous fact that the laws of nature our universe obeys can always, in the end, be expressed in the language of calculus, in the form of differential equations. Such equations describe the difference between something at this moment and at the next, or between something at this point and at a point infinitesimally close to it. Although the details vary with the subject at hand, the structure of the laws of nature is always the same. This astonishing claim can also be put another way: there seems to be something like a cosmic code, an operating system that keeps everything changing, everywhere, moment by moment. Calculus taps into this order and gives it expression.

Isaac Newton was among the first to glimpse this secret of the universe. He discovered that the orbits of the planets, the rhythm of the tides, and the trajectories of cannonballs could all be described, explained, and predicted by a small set of differential equations. Today we call these Newton's laws of motion and universal gravitation. Ever since Newton, whenever a new mystery of the universe has been uncovered, the same pattern has held. From the ancient elements of earth, air, fire, and water to the more recent electrons, quarks, black holes, and superstrings, everything inanimate in the universe obeys the rules of differential equations. I would wager that this is what Feynman meant when he said that "calculus is the language of God." If anything deserves to be called the secret of the universe, it is calculus.

Statistical Learning Notes | Decision Tree 2 - Bagging & Random Forest & Boosting

1. Bagging
   – Out-of-Bag (OOB) Error Estimation
   – Variable Importance Measures
2. Random Forest
   – Decorrelating the Trees
3. Boosting
   – Algorithm
   – Three Tuning Parameters

These three ensemble methods use trees as building blocks to construct more powerful prediction models.

1. Bagging

The bootstrap is used when it is hard, or even impossible, to directly compute the standard deviation. Bootstrap aggregation (or bagging) can reduce the high variance of decision trees: high variance means that fitting trees on different parts of the same data may produce quite different trees. Given a set $\{X_1, \dots, X_n\}$ with variance $\sigma^2$
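The variance-reduction idea behind bagging can be sketched numerically (a minimal simulation with assumed normal data; averaging over bootstrap samples keeps the estimate centered on the same quantity but makes it vary far less than individual observations do):

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 50, 500
x = rng.normal(0.0, 2.0, size=n)  # one sample of n observations, sigma = 2

# Bagging analogue: the mean of each of B bootstrap resamples plays the
# role of one fitted model; the bagged estimate averages over them.
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(B)])

# Individual observations vary with variance ~ sigma^2, while the
# bootstrap means vary far less (on the order of sigma^2 / n).
print(x.var(), boot_means.var())
```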