Posts

Launch of '두리' (Dori), a Service for Finding Self-Improvement and Motivation Mates

1:1 Routine-Mate Matching Service
Self-improvement is hard alone, so do it together with 두리 (Dori)!

🎉 Routine Challenge Event
📊 Challenge conditions
- Create a routine of your choice in the app
- Achieve at least 70% of your goal consistently for a month or more
🏆 Reward
- Starbucks coupons and CGV movie tickets, awarded by lottery! Both prizes go to two participants who put in a month of effort.
Notes
- You are entered automatically once you meet the conditions.
- Applies only to routines created by December 31, 2024.

📢 두리 (Dori) Reviewer Group Now Recruiting!
🎁 Special benefits
- Starbucks coffee coupon issued immediately
- 3,000 'Moti' coins (worth about 30,000 KRW)
- An additional 50,000 KRW gift certificate if you complete your daily check-ins!
📝 Reviewer activities
- Set a self-improvement goal of your choice
- Share your goal-achievement rate on SNS every day
- Record your growth over one month
📝 How to apply
- Submit the Google Form on the event page
Notes
- May be combined with the Routine Challenge Event

🎉 **Go to the event site 🎉**
Google Play Store: https://play.google.com/store/apps/details?id=com.s2g.dori&pli=1
Apple App Store: https://apps.apple.com/app/%EB%91%90%EB%A6%AC/id6639611555
↑ Clicking takes you to the app download page.

💭 Questions and suggestions
Developer email 💌 stuvery0@gmail.com
Send us any feedback!

Business registration information
Business registration number: 320-02-03526
E-commerce license number: 2024-경기김포-6495

The US and British Attacks on the Houthis and Their Impact on the Stock Market

The US and UK launched airstrikes on more than a dozen sites used by the Iranian-backed Houthis in Yemen. This was a significant military response to the Houthis' persistent campaign of drone and missile attacks on commercial ships in the Red Sea. The Houthis control most of western Yemen, including its Red Sea coastline. The strikes have raised fears of a broader escalation of the conflict in the region.

The attacks led to an increase in oil prices. Brent jumped on fears of further disruption to shipping and of the conflict expanding into a broader regional conflagration; the global benchmark was trading around 4% higher. The rise in oil prices was driven by the market's perception that the strikes mark an escalation of the conflict.

The impact of rising oil prices on the stock market
The impact of rising oil prices on the stock market is complex. An increase in oil prices usually lowers...

Consumer Price Index (CPI), Interest Rate, and Stock Price Forecast

What is CPI?
The Consumer Price Index (CPI) is a measure that examines the weighted average of prices of a basket of consumer goods and services, such as transportation, food, and medical care. It is calculated by taking the price change for each item in the predetermined basket of goods and averaging them.

Key points about the CPI:
- It measures the overall change in consumer prices based on a representative basket of goods and services over time.
- It is the most widely used measure of inflation, closely followed by policymakers, financial markets, businesses, and consumers.
- The CPI is based on about 80,000 price quotes collected monthly from some 23,000 retail and service establishments as well as 50,000 rental housing units. Housing rents are used to estimate the change in shelter costs, including owner-occupied housing, which accounts for about a third of the CPI.

The CPI is a crucial tool for economic analysis and policy-making, and it's also used to adjust pensions...
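The "weighted average of price changes" idea can be sketched in a few lines of Python. The items, weights, and prices below are made-up illustration numbers, not actual CPI data:

```python
# Minimal sketch of a weighted price index with made-up numbers.
# Each item carries a spending weight plus base-period and current prices.
basket = {
    # item: (weight, base_price, current_price)
    "food":           (0.30, 100.0, 106.0),
    "transportation": (0.20, 100.0, 103.0),
    "shelter":        (0.35, 100.0, 105.0),
    "medical care":   (0.15, 100.0, 102.0),
}

def cpi(basket: dict) -> float:
    """Weighted average of price relatives, scaled so the base period = 100."""
    total_weight = sum(w for w, _, _ in basket.values())
    index = sum(w * (cur / base) for w, base, cur in basket.values()) / total_weight
    return 100.0 * index

print(round(cpi(basket), 2))  # index level of 104.45, i.e. 4.45% inflation since the base period
```

With these toy weights, shelter moves the index most, which mirrors the point above that shelter accounts for about a third of the CPI.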

U.S. Stock Outlook 2024

Let's take a look at the 2024 U.S. stock outlook, based on recent announcements from Morgan Stanley and Forbes.

What is Morgan Stanley?
Morgan Stanley is an American multinational investment banking and financial services company headquartered at 1585 Broadway in Manhattan, New York. It offers a range of services to help clients raise, manage, and distribute funds.

Morgan Stanley's Outlook
Morgan Stanley said that stocks look overvalued heading into 2024, so analysts' estimates of corporate earnings may be too optimistic given slowing U.S. economic growth. It also said the market may be overestimating the number of Fed rate cuts in 2024. On the other hand, the stock market will provide investors with "greater opportunities," it said, "and the market has been efficient at rewarding and punishing the right stocks for the right fundamental reasons." Analysts predict the S&P 500 will end...

Reinforcement Learning Study - 7

Deep Reinforcement Learning

In the real world, the state or action space is often too big to record all information about the model in a (value) table. To generalize this information, the most powerful generalization tool (function), the neural network, is used. A node is the fundamental component of a neural network: each node linearly combines its inputs (WX + b) and then outputs the result through a nonlinear function (sigmoid, ReLU, etc.).

Value-based agent

Value-based learning is a method in which a neural network is used to predict the value function. A loss function is used to update the parameters of the neural network; it is defined as the difference between the predicted value and the true value. In Q-learning, the value function is the action-value function Q(s, a), so the loss is the squared difference between the network's prediction and the true value, L(θ) = (Q_true(s, a) − Q_θ(s, a))². In fact, we can't use that equation, because we don't know the true value Q_true. So here we have a smart way to solve this problem. That's the expected val...
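The idea of training a value function on a target that stands in for the unknown true Q can be sketched without a deep network at all. This is my own toy example, not the post's setup: a linear Q-function over one-hot features, updated by semi-gradient descent on the squared TD error, where the TD target r + γ·max Q(s', ·) replaces the unknown Q_true:

```python
# Sketch (toy example): linear Q-value approximation Q(s, a) = w · phi(s, a),
# trained on the squared TD error
#   L = (r + gamma * max_b Q(s', b) - Q(s, a))^2,
# where the TD target stands in for the unknown true value.

GAMMA, ALPHA = 0.9, 0.1

def phi(state, action, n_states=4, n_actions=2):
    """One-hot feature vector for a (state, action) pair."""
    v = [0.0] * (n_states * n_actions)
    v[state * n_actions + action] = 1.0
    return v

def q_value(w, state, action):
    return sum(wi * xi for wi, xi in zip(w, phi(state, action)))

def td_update(w, s, a, r, s_next, n_actions=2):
    """One semi-gradient step on the squared TD error (Q-learning style)."""
    target = r + GAMMA * max(q_value(w, s_next, b) for b in range(n_actions))
    error = target - q_value(w, s, a)
    return [wi + ALPHA * error * xi for wi, xi in zip(w, phi(s, a))]

# Toy usage: repeatedly seeing reward 1 for (s=0, a=1) drives Q(0, 1) toward 1.
w = [0.0] * 8
for _ in range(50):
    w = td_update(w, s=0, a=1, r=1.0, s_next=1)
print(round(q_value(w, 0, 1), 3))  # approaches 1.0
```

A deep Q-network does the same thing with a neural network in place of the linear map, but the loss and the bootstrapped target are identical in shape.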

Reinforcement Learning Study - 6

Q-Learning (TD Control)

On-Policy vs Off-Policy

Imagine you're watching a friend play a game. The friend will play better through the experience of playing. How about you? We can also learn from watching other people play. In this situation, the learning method the friend uses is called the 'on-policy' method, while what you do is called the 'off-policy' method. These methods can be defined as follows:

On-Policy: Target Policy == Behavior Policy
Off-Policy: Target Policy != Behavior Policy

The target policy is the policy that the agent wants to train, and the behavior policy is the policy that interacts with the environment. In other words, we want to train our 'target policy' by watching our friend's 'behavior policy'.

The off-policy method has three advantages over the on-policy method. First, an agent can reuse previous experiences, because the target policy does not necessarily have to be the same as the behavior policy. Second, high quality...
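The on-policy/off-policy distinction shows up concretely in the update rules of SARSA and Q-learning. A minimal tabular sketch (names and constants are mine, not from the post): both act with the same ε-greedy behavior policy, and only the bootstrap term differs:

```python
import random

# Tabular sketch: the behavior policy is epsilon-greedy in both methods;
# what differs is which action's value the update bootstraps from.
GAMMA, ALPHA, EPS = 0.9, 0.5, 0.1

def eps_greedy(Q, s, n_actions):
    """Behavior policy: explore with probability EPS, else act greedily on Q."""
    if random.random() < EPS:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(s, a)])

def sarsa_update(Q, s, a, r, s2, a2):
    """On-policy: bootstrap with a2, the action the behavior policy actually took."""
    Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] - Q[(s, a)])

def q_learning_update(Q, s, a, r, s2, n_actions):
    """Off-policy: bootstrap with the greedy (target-policy) action, regardless
    of what the behavior policy does next."""
    best = max(Q[(s2, b)] for b in range(n_actions))
    Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])
```

Because Q-learning's target uses max over next actions rather than the action actually taken, the experience can come from any behavior policy, including a friend's play or a replay buffer.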

Reinforcement Learning Study - 5

Control in Model-free MDPs

From now on, we are going to learn about three methods for solving the control problem: MC Control, SARSA, and Q-learning.

The policy iteration method we learned before is a good method for solving model-based problems. However, in a model-free MDP we can't use it, because policy iteration relies on the Bellman expectation equation, which can only be used when we know the model. Another reason is that an agent doesn't know the next state when it selects an action, so we can't form a greedy policy over states.

MC Control

So here is the solution, obtained through some changes to policy iteration:
1. Use MC instead of the Bellman expectation equation. Using MC, we can evaluate each state empirically.
2. Use Q instead of V. Although the agent doesn't know which state each action leads to, all it has to do is select the action with the highest expected value.
3. Explore with probability epsilon. In the purely greedy way, if an action is eva...
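The three changes above can be sketched in a few lines of Python. This is my own minimal illustration (not code from the post): Q is evaluated from sampled returns (change 1), action selection reads Q rather than V (change 2), and exploration is ε-greedy (change 3):

```python
import random
from collections import defaultdict

# Minimal sketch of MC control's ingredients on a tabular Q (toy example).
GAMMA, EPS = 0.9, 0.1

def eps_greedy(Q, s, n_actions):
    """Change 3: explore with probability EPS, otherwise act greedily on Q."""
    if random.random() < EPS:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(s, a)])

def mc_update(Q, counts, episode):
    """Changes 1-2: every-visit MC update of Q(s, a) from one episode's returns.
    episode is a list of (state, action, reward) tuples in time order."""
    G = 0.0
    for s, a, r in reversed(episode):
        G = r + GAMMA * G                               # discounted return from (s, a)
        counts[(s, a)] += 1
        Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]   # running average of returns
```

Repeating "run an ε-greedy episode, then call mc_update" is the model-free counterpart of policy iteration's evaluate-then-improve loop.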