Thursday 21 December 2017

Quantstrat forex trading


Trading with Bollinger Bands®

Bollinger Bands® bracket price action. In times of high volatility they widen, and in times of low volatility they draw closer together, so they essentially adapt to the market's movement and volatility. This extra volatility filter is the real value of the tool. There are two conditions we look for in a trading opportunity: we want to buy a pullback to support when the market is in an uptrend, or sell a rally up to resistance when the market is in a downtrend. Bollinger Bands usually provide good support and resistance for this setup, so we just have to make sure we follow the strongly trending pairs.

Let's look at an example on this USD/CHF daily chart (created with FXCM Marketscope Charts 2.0). The trend is up, since we can see a series of higher highs and higher lows, which means we are looking for a dip down to support (the lower band) for a buying opportunity. There are two examples on the chart: the first took place in May and the second in June of this year. The market traded down to the lower Bollinger Band in each of the cases marked by the boxes. That touch is not necessarily the buy itself, but rather the signal to start looking for a buy on a reversal. Traders use a variety of methods to determine the entry, ranging from their favorite indicator to simply buying when the market moves up through the previous high. A popular approach is to buy on the first candle that closes above the center line, the 20-day simple moving average. This serves as additional confirmation of the reversal and improves the odds of success on the trade. (In the chart above, the buy candle is marked with a green arrow.) Traders can then place their protective stop below the low of the box and look for twice the risk in profit, for a 1:2 risk:reward ratio.

It is worth mentioning that price action on USD/CHF has moved down and touched the lower Bollinger Band four times over the past few days, which means we should be on the lookout for another buying opportunity. Rather than simply buying right now, this would be the time to apply your method for pinpointing the buy entry that improves your chance of success on the trade. Price has moved up since the lower band was tested last week. Exercising patience and discipline and waiting for the first close above the 20-day simple moving average would be one way to enter this trade using the Bollinger Band strategy you just learned.

New to the forex market? Save hours in figuring out what forex trading is all about: take the free 20-minute "New to FX" course presented by DailyFX Education. In the course you will learn the basics of a forex transaction, what leverage is, and how to determine an appropriate amount of leverage for your trading. Register here to start trading forex now. DailyFX provides forex news and technical analysis on the trends that influence the global currency markets.
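As a rough illustration of the entry mechanics just described (a touch of the lower band in an uptrend, confirmed by the first close back above the 20-day moving average), here is a minimal R sketch. The data source, the ticker, the 200-day SMA used as a stand-in for the uptrend condition, and the five-day touch window are all assumptions for illustration and are not part of the original article.

library(quantmod)   # loads TTR and xts as well

# Hypothetical daily series; any USD/CHF daily close series would do here.
px <- Cl(getSymbols("USDCHF=X", from = "2014-01-01", auto.assign = FALSE))

bb <- BBands(px, n = 20, sd = 2)                 # columns: dn, mavg, up, pctB

uptrend    <- px > SMA(px, 200)                  # crude stand-in for "higher highs and higher lows"
touchedLow <- runSum((px <= bb$dn) * 1, 5) >= 1  # lower band touched within the last five days
entry      <- uptrend & touchedLow & (px > bb$mavg) & (lag(px) <= lag(bb$mavg))

tail(px[which(entry == TRUE)])                   # dates of the confirmed pullback entries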
This post will demonstrate how to take turnover into account when working with returns-based data, using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this on a basic strategy on the nine sector SPDRs.

First off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are plenty of individuals out there with a multitude of questions (many of which can already be found answered on this blog), occasionally there will be an industry veteran, a PhD student from Stanford, or another very intelligent person who asks a question about a topic I have not yet covered here, which prompts a post demonstrating another technical aspect of R. This is one of those times.

This demonstration is about computing turnover in return space using the PerformanceAnalytics package. Outside of the PortfolioAnalytics package, PerformanceAnalytics, with its Return.Portfolio function, is the go-to package for simulating portfolio management, since it can take a set of weights and a set of returns and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics's functions.

Again, the strategy is this: take the nine three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest in it. Normalize across all invested sectors (that is, 1/9th each if invested in all nine, 100% in one if only one is invested in, and 100% cash, denoted with a zero-return vector, if no sectors are invested in). It is a simple toy strategy, since the strategy itself is only there for the demonstration.

Here is the basic setup: get the SPDRs, merge them together, compute their returns, generate the signal, and create the zero vector, since Return.Portfolio treats weights of less than 1 as a withdrawal, and weights above 1 as the addition of more capital (a big FYI here).
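Here is a minimal sketch of that setup, assuming the column handling and object names below (this is not the post's original code). The resulting `returns` and `weights` objects are what get fed to Return.portfolio in the next step.

library(quantmod)
library(PerformanceAnalytics)

symbols <- c("XLB", "XLE", "XLF", "XLI", "XLK", "XLP", "XLU", "XLV", "XLY")
getSymbols(symbols, from = "1998-01-01")
prices  <- do.call(cbind, lapply(symbols, function(s) Ad(get(s))))
colnames(prices) <- symbols
returns <- Return.calculate(prices)[-1, ]
returns$cash <- 0                                      # zero-return cash column

sma200 <- xts(apply(prices, 2, SMA, n = 200), order.by = index(prices))
signal <- (prices > sma200) * 1
signal <- signal[endpoints(signal, on = "months"), ]   # end-of-month snapshots

weights <- signal / rowSums(signal)                    # normalize across invested sectors
weights[is.nan(weights)] <- 0                          # months with no sectors above the SMA
weights <- cbind(weights, xts(1 - rowSums(weights), order.by = index(weights)))
colnames(weights) <- c(symbols, "cash")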
Now, here is how to compute turnover. The trick is this: when you call Return.portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight, the latter two standing for beginning-of-period and end-of-period weights. Turnover is simply the difference between how the day's return moves the allocated portfolio from its previous ending point, and the portfolio that actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight drifted by that day's return for the asset, while the new beginning-of-period weight is the end-of-period weight plus whatever transactions were made. So, to find the actual transactions (or turnover), subtract the previous end-of-period weight from the beginning-of-period weight.

This is what such transactions look like for this strategy. One thing we can do with such data is compute a rolling one-year turnover, with the following code, and it looks like this: essentially, the value of one year's worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is about 800% at its maximum. That may be rather high for some people.

Now, here is the application when penalizing transaction costs at 20 basis points per percentage point traded (that is, it costs 20 cents to trade 100 dollars' worth). So, at 20 basis points of transaction costs, this takes away about one percent in returns per year from this (admittedly unimpressive) strategy, which is far from negligible. And that is how you actually compute turnover and transaction costs. In this case the transaction cost model was very simple; however, given that Return.portfolio reports transactions at the individual asset level, one can get as complicated as one likes with modeling transaction costs.
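A sketch of that calculation, continuing from the hypothetical `returns` and `weights` objects above; the 252-day rolling window and the 20 bps figure follow the text, the object names are assumptions.

out <- Return.portfolio(R = returns, weights = weights, verbose = TRUE)

# Transactions: beginning-of-period weights minus the drifted weights from the
# end of the previous period.
txns    <- out$BOP.Weight - lag(out$EOP.Weight)
dailyTO <- xts(rowSums(abs(txns), na.rm = TRUE), order.by = index(txns))

rollingTO <- runSum(dailyTO, 252)        # rolling one-year two-way turnover

txCost  <- dailyTO * 0.0020              # 20 bps per 100% traded
netRets <- out$returns - txCost
comparison <- cbind(out$returns, netRets)
colnames(comparison) <- c("gross", "net")
charts.PerformanceSummary(comparison)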
Thanks for reading. NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you will be able to find me there.

This post will outline an easy-to-make mistake when writing vectorized backtests, namely using a signal obtained at the end of a period to enter (or exit) a position during that same period. The difference in results is enormous. Today I saw two separate posts, from Alpha Architect and from Mike Harris, both referencing a paper by Valeriy Zakamulin arguing that some previous trend-following research by Glabadanidis was done with shoddy results, and that Glabadanidis's results were only reproducible by introducing lookahead bias. The following code shows how to reproduce this lookahead bias. First comes the setup of a basic moving-average strategy on the S&P 500 index, going as far back as Yahoo data will provide. And here is how to introduce the lookahead bias. These are the "results". Of course, this equity curve is useless, so here is one in log scale. As can be seen, lookahead bias makes an enormous difference. Here are the numerical results: again, completely ridiculous. Note that when using Return.Portfolio (the function in PerformanceAnalytics), the package will automatically apply your weights to the next period's return rather than the current one. But for those writing "simple" backtests that can be quickly done with vectorized operations, one mistake can make the difference between a backtest within the realm of the reasonable and pure nonsense. If one wants to test for said nonsense when faced with impossible-to-replicate results, the mechanics shown above are the way to do it.

Now, on to other news: I would like to thank Gerald M for staying on top of one of the Logical Invest strategies, namely their simple global market rotation strategy outlined in an article from an earlier blog post. Up until March 2015 (the date of the blog post), the strategy had performed well. But after that date, it has been a complete disaster, which, in hindsight, was apparent when I put it through the hypothesis-driven development framework process I wrote about earlier. So while a great deal has been written about not simply throwing away a strategy because of short-term underperformance, and about how anomalies such as momentum and value exist partly because of the career risk created by short-term underperformance, it is never a good sign when a strategy produces historically large losses, particularly after being published in such a humble corner of the quantitative finance world. In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago.

Happy new year. This post will be a quick one covering the relationship between the simple moving average and time-series momentum. The implication is that one can potentially derive better time-series momentum indicators than the classical one applied in so many papers.

The main idea of this post is fairly simple: I am sure we are all familiar with classical momentum, that is, the price now compared with the price some time ago (3 months, 10 months, 12 months, and so on), e.g. P(now) - P(10). And I am sure everyone is familiar with the simple moving average indicator as well, e.g. SMA(10). Well, as it turns out, these two quantities are actually related. If, instead of expressing momentum as the difference between two prices, it is expressed as the sum of returns, it can be written (for 10-month momentum) as: MOM_10 = this month's return + last month's return + the return from 2 months ago + ... + the return from 9 months ago, for a total of 10 months in our little example. This can be written as MOM_10 = (P(0) - P(1)) + (P(1) - P(2)) + ... + (P(9) - P(10)), where each difference within parentheses denotes one month's return. This can then be rearranged, by associativity, as (P(0) + P(1) + ... + P(9)) - (P(1) + P(2) + ... + P(10)). In other words, momentum, aka the difference between two prices, can be rewritten as the difference between two cumulative sums of prices. And what is a simple moving average? Just a cumulative sum of prices divided by however many prices are summed over.

Here is some R code to demonstrate, with the resulting number of times these two signals agree: in short, every time. Now, what exactly is the point of this little example? Here is the punchline: the simple moving average is a rather simplistic filter. It works as a pedagogical example, but it has some well-known weaknesses regarding lag, windowing effects, and so on. Here is a toy example of how one can obtain a different momentum signal by changing the filter. The result: while the difference-of-EMA(10) strategy did not do better than the difference of SMA(10) (aka standard 10-month momentum), that is not the point. The point is that the momentum signal is derived from a simple moving-average filter, and that by swapping in a different filter one can still run a momentum-style strategy. Put differently, the general takeaway here is that momentum is the slope of a filter, and one can compute momentum in an infinite number of ways depending on the filter used, arriving at a variety of momentum strategies.
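A short sketch verifying the equivalence on monthly SPY data; the ticker and start date are arbitrary choices, not the post's original code.

library(quantmod)

spy     <- Ad(getSymbols("SPY", from = "1995-01-01", auto.assign = FALSE))
monthly <- Cl(to.monthly(spy))                    # month-end prices

mom10    <- monthly - lag(monthly, 10)            # classic 10-month momentum: P(0) - P(10)
sma10    <- SMA(monthly, 10)
smaSlope <- sma10 - lag(sma10, 1)                 # one-month change in the 10-month SMA

# P(0) - P(10) = 10 * (SMA10(t) - SMA10(t-1)), so the signs always agree.
mean(sign(mom10) == sign(smaSlope), na.rm = TRUE)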
Thanks for reading. NOTE: I am currently employed in Chicago and am always open to networking. Contact me at my email, ilya.kipnis@gmail.com, or find me on LinkedIn here.

This post describes a first, failed attempt to apply the ensemble-filter methodology to try to come up with a weighting process on SPY that would theoretically be a gradual way to move between conviction in a bull market, a bear market, and anything in between. This is a follow-up post to this blog post.

My thinking went like this: in a bull market, responsive filters should sit above smooth filters, and vice versa, since there is usually a trade-off between the two. In fact, in my particular formulation, the quantity of the square root of the EMA of squared returns penalizes any deviation from a flat line altogether (it is inspired by Basel's measure of volatility, which is the square root of the 18-day EMA of squared returns), while the responsiveness quantity penalizes any deviation from the time series of realized prices. Whether these are the two best measures of smoothness and responsiveness is a topic I would certainly appreciate feedback on.

In any case, an idea I had off the top of my head was that, beyond having a way of weighting multiple filters by their responsiveness (deviation from price action) and smoothness (deviation from a flat line), by taking the sum of the signs of the differences between each filter and its neighbor on the responsiveness-to-smoothness spectrum, given enough ensemble filters (say 101, so there are 100 differences), one would obtain a way to move from full conviction in a bull market, to a bear market, to anything in between, as a smooth process without schizophrenic swings of conviction.

Here is the code to do this on SPY from inception to 2003, and here is the very underwhelming result: essentially, while I expected to see changes in conviction of maybe 20% at most, instead my indicator of the sum of sign differences did exactly what I had hoped it would not do, which is to act as a very binary sort of mechanism. My intuition was that, between an "obvious bull market" and an "obvious bear market", some differences would be positive, some negative, they would net each other out, and the conviction would be zero. Furthermore, even though each individual crossover is binary, getting all one hundred signs to be positive or negative would be a more gradual process. Apparently, that was not the case. To continue this line of thinking later, one thing to try would be a test of the significance of the difference for every pair. Certainly, I am not keen on giving up on this idea just yet, and, as usual, feedback would always be appreciated.
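Here is a compact sketch of the sum-of-signs mechanic described above. It uses plain SMAs ordered from most responsive (short) to smoothest (long) as a stand-in for the weighted ensemble filters of the earlier post; the lookback grid is an assumption.

library(quantmod)

spy <- Ad(getSymbols("SPY", from = "2003-01-01", auto.assign = FALSE))

# Ensemble of filters ordered from most responsive to smoothest.
lookbacks <- seq(10, 250, by = 10)
filters   <- xts(sapply(lookbacks, function(n) SMA(spy, n)), order.by = index(spy))

# Sign of the difference between each filter and its smoother neighbor, summed
# across the ensemble and scaled to [-1, 1]: the "conviction" reading.
signs      <- sign(filters[, -ncol(filters)] - filters[, -1])
conviction <- xts(rowSums(signs) / ncol(signs), order.by = index(signs))
plot(conviction, main = "Sum of sign differences, scaled")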
Thanks for reading. NOTE: I am currently consulting in an analytics capacity in downtown Chicago. However, I am also looking for collaborators who would like to pursue interesting trading ideas. If you feel my skills may be of help to you, let's talk. You can email me at ilya.kipnis@gmail.com, or find me on LinkedIn here.

This review will be about Inovance Tech's TRAIDE system. It is an application geared toward letting retail investors apply proprietary machine-learning algorithms to assist them in creating systematic trading strategies. Currently, my one-line review is that while I hope the company's founders succeed, the application is still in an early stage, and so should be checked out by potential users and venture capitalists as something with proof of potential, rather than as a finished product ready for the mass market. While this piece serves as a review, it also contains my thoughts on how Inovance Tech can improve its product.

A bit of background: I have spoken several times with some of the company's founders, who sound like individuals at about my age level (so, fellow millennials). Ultimately, the sales pitch is this: systematic trading is cool, machine learning is cool, and therefore applying machine learning to systematic trading is awesome (and a surefire way to make profits, as Renaissance Technologies has shown). While this may sound a bit snarky, it is also, in some ways, true. Machine learning has become the talk of the town, from IBM's Watson (RenTec hired a bunch of speech-recognition experts from IBM a couple of decades back), to Stanford's self-driving car (invented by Sebastian Thrun, who now heads Udacity), to the Netflix prize, to God knows what Andrew Ng is doing with deep learning at Baidu. Considering how well machine learning has done at far more complex tasks than "create a half-decent systematic trading algorithm", it should not be too much to ask this powerful field at the intersection of computer science and statistics to help the retail investor who is glued to watching charts generate a far better return on investment than through discretionary charting and noise trading. To my understanding from conversations with Inovance Tech's founders, this is explicitly their mission.

However, I am not sure Inovance's TRAIDE application actually accomplishes this mission in its current state. Here is how it works: users select one asset at a time and a date range (data goes back to December 31, 2009). Assets are currently limited to highly liquid currency pairs, and the following settings are available: 1-hour, 2-hour, 4-hour, 6-hour, or daily bar time frames. Users then select from a variety of indicators, ranging from technical ones (moving averages, oscillators, volume calculations, etc.; mostly an assortment of 20th-century indicators, though the occasional adaptive moving average has managed to sneak in, namely KAMA (see my DSTrading package) and MAMA, aka the Mesa Adaptive Moving Average from John Ehlers) to more esoteric ones such as some sentiment indicators. Here, however, is where things start to head south for me. Namely, while it is easy to add as many indicators as the user would like, there is essentially no documentation on any of them, with no links to references, etc., so users will have to bear the burden of actually understanding what each and every one of the indicators they select does, and whether or not those indicators are useful. So far, the TRAIDE application does nothing to enable users to familiarize themselves with the purpose of these indicators or their theoretical objective (measuring conviction in a trend, detecting a trend, an oscillator-type indicator, and so on).
Regarding indicator selection, users also specify one parameter setting for each indicator per strategy. E.g. if I had an EMA crossover, I would have to create a new strategy for a 20/100 crossover, a 21/100 crossover, and so on, rather than specifying something like: short EMA 20-60, long EMA 80-200. Quantstrat itself has this functionality, and while I do not recall covering parameter robustness checks and optimization (in other words, testing multiple parameter sets; whether one uses them for optimization or robustness is up to the user, not the functionality) in quantstrat on this blog specifically, this information very much exists in what I consider the "official quantstrat manual", found here. In my opinion, the capacity to sweep a range of values is mandatory in order to demonstrate that any given parameter setting is not a random fluke. Outside of quantstrat, I have demonstrated this methodology in my hypothesis-driven development posts, and in arriving at parameter choices for trading volatility.

Where TRAIDE does something interesting, however, is that once the user specifies his or her indicators and parameters, its "proprietary machine learning" algorithms (WARNING: COMPLETELY BLACK BOX) determine for which range of values of the indicators in question the best results were obtained within the backtest, and assign them bullishness and bearishness scores. In other words, "looking backwards, these were the indicator values that did best over the course of the sample." While there is definite value in exploring the relationships between indicators and future returns, I think TRAIDE needs to do more in this area, such as reporting p-values, conviction, and so on. For example, if you combine enough indicators, your "rule" is a market order that is simply the intersection of all of your indicator ranges. For instance, TRAIDE may tell a user that the strongest bullish signal is when the difference between the moving averages is between 1 and 2, the ADX is between 20 and 25, the ATR is between 0.5 and 1, and so on. Each setting the user selects further narrows down the number of trades the simulation makes. In my opinion, there are more ways to explore the interplay of indicators than one giant AND statement, such as an "OR" statement of some kind (e.g. select all values, and put on a trade when 3 out of 5 indicators fall into the selected bullish range, in order to place more trades). While it may be wise to filter trades down to very rare instances when trading a massive universe of instruments (e.g. several thousand possible instruments, of which only several are traded at any given time), with TRAIDE a user selects only one asset class (currently, one currency pair) at a time, so I am hoping to see TRAIDE add more functionality in terms of what constitutes a trading rule.

After the user selects both a long and a short rule (simply by filtering on the indicator ranges that TRAIDE's machine-learning algorithms have said are good), TRAIDE turns that into a backtest with a long equity curve, a short equity curve, a total equity curve, and trade statistics for aggregate, long, and short trades. In quantstrat, by contrast, one only gets aggregate trade statistics; whether long or short, all that matters to quantstrat is whether the trade made or lost money. For sophisticated users, it is trivial enough to toggle a set of rules on or off, but TRAIDE does more to hold the user's hand in that respect.
Lastly, TRAIDE then generates MetaTrader4 code for the user to download, and that is the process.

In my opinion, while what Inovance Tech has set out to do with TRAIDE is interesting, I wouldn't recommend it in its current state. For sophisticated individuals that know how to go through a proper research process, TRAIDE is too stringent in terms of parameter settings (one at a time), pre-coded indicators (its target audience probably can't program too well), and asset classes (again, one at a time). However, for retail investors, my issue with TRAIDE is this: there is a whole assortment of undocumented indicators, which then move to black-box machine learning algorithms. The result is that the user has very little understanding of what the underlying algorithms actually do, and why the logic he or she is presented with is the output. While TRAIDE makes it trivially easy to generate any one given trading system, as multiple individuals have stated in slightly different ways before, writing a strategy is the easy part. Doing the work to understand whether that strategy actually has an edge is much harder: namely, checking its robustness, its predictive power, its sensitivity to various regimes, and so on. Given TRAIDE's rather short data history (2010 onwards), coupled with the opaqueness the user operates under, my analogy would be this: it's like giving an inexperienced driver the keys to a sports car in a thick fog on a winding road. Nobody disputes that a sports car is awesome. However, the true burden of the work lies in making sure that the user doesn't wind up smashing into a tree.

Overall, I like the TRAIDE application's mission, and I think it may have potential as something for the retail investors that don't intend to learn the ins and outs of coding a trading system in R (despite me demonstrating many times over how to put such systems together). I just think that more work needs to go into making sure that the results a user sees are indicative of an edge, rather than opening up the possibility of highly flexible machine learning algorithms chasing ghosts in one of the noisiest and most dynamic data sets one can possibly find.

My recommendations are these:
1) Multiple asset classes.
2) Allow parameter ranges, and cap the number of trials at any given point (e.g. 4 indicators with ten settings each = 10,000 possible trading systems, which blows up the servers). To narrow down the number of trial runs, use techniques from experimental design to arrive at decent combinations. (I wish I remembered my response surface methodology techniques from my master's degree about now.)
3) Allow modifications of order sizing (e.g. volatility targeting, stop losses), such as I wrote about in my hypothesis-driven development posts.
4) Provide some sort of documentation for the indicators, even if it's as simple as a link to Investopedia (preferably a lot more).
5) Far more output is necessary, especially for users who don't program, namely, to distinguish whether or not there is a legitimate edge, or whether there are too few observations to reject the null hypothesis of random noise.
6) Far longer data histories. 2010 onwards just seems too short of a time frame to be sure of a strategy's efficacy, at least on daily data (this may not be true for hourly).
7) Factor in transaction costs. Trading on an hourly time frame will mean far less P&L per trade than on a daily resolution.
If MT4 charges a fixed ticket price, users need to know how this factors into their strategy.
8) Lastly, dogfooding. When I spoke last time with Inovance Tech's founders, they claimed they were using their own algorithms to create a forex strategy, which was doing well in live trading. By the time more of these suggestions are implemented, it would be interesting to see if they have a track record as a fund, in addition to as a software provider.

If all of these things are accounted for and automated, the product will hopefully accomplish its mission of bringing systematic trading and machine learning to more people. I think TRAIDE has potential, and I'm hoping that its staff will realize that potential. Thanks for reading.

NOTE: I am currently contracting in downtown Chicago, and am always interested in networking with professionals in the systematic trading and systematic asset management/allocation spaces. Find my LinkedIn here.

EDIT: Today in my email (Dec. 3, 2015), I received a notice that Inovance was making TRAIDE completely free. Perhaps they want a bunch more feedback on it.

This post will demonstrate a method to create an ensemble filter based on a trade-off between smoothness and responsiveness, two properties looked for in a filter. An ideal filter would be responsive to price action, so as to not hold incorrect positions, while also being smooth, so as to not incur false signals and unnecessary transaction costs.

So, ever since my volatility trading strategy, using three very naive filters (all SMAs), completely missed a 27% month in XIV, I've decided to try and improve ways to create better indicators in trend following. Now, under the realization that there can potentially be tons of complex filters in existence, I decided instead to focus on a way to create ensemble filters, by using an analogy from statistics/machine learning.

In static data analysis, for a regression or classification task, there is a trade-off between bias and variance. In a nutshell, variance is bad because of the possibility of overfitting on a few irregular observations, and bias is bad because of the possibility of underfitting legitimate data. Similarly, with filtering time series, there are similar concerns, except bias is called lag, and variance can be thought of as a "whipsawing" indicator. Essentially, an ideal indicator would move quickly with the data, while at the same time not possess a myriad of small bumps-and-reverses along the way, which may send false signals to a trading strategy.

So, here's how my simple algorithm works. The inputs to the function are the following:
A) The time series of the data you're trying to filter.
B) A collection of candidate filters.
C) A period over which to measure smoothness and responsiveness, defined as the square root of the n-day EMA (2/(n+1) convention) of the following:
a) Responsiveness: the squared quantity of price/filter - 1
b) Smoothness: the squared quantity of filter(t)/filter(t-1) - 1 (aka R's return.calculate function)
D) A conviction factor, to which power the errors will be raised. This should probably be between 0.5 and 3.
E) A vector that defines the emphasis on smoothness (vs. emphasis on responsiveness), which should range from 0 to 1.
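As a rough illustration of this weighting scheme, here is a minimal sketch. It is not the post's actual function: the inverse-error weighting, the 20-day EMA period, and the SMA candidates are all illustrative assumptions.

library(quantmod)

ensembleFilter <- function(price, candidates, n = 20, conviction = 2, lambda = 0.5) {
  candidates <- na.omit(candidates)
  price      <- price[index(candidates)]
  # Responsiveness error: sqrt of the n-day EMA of (price / filter - 1)^2.
  respErr <- xts(apply(candidates, 2, function(f)
               sqrt(EMA((as.numeric(price) / f - 1)^2, n))),
               order.by = index(candidates))
  # Smoothness error: sqrt of the n-day EMA of the filter's own squared returns.
  smoothErr <- xts(apply(candidates, 2, function(f)
                 sqrt(EMA((f / c(f[1], head(f, -1)) - 1)^2, n))),
                 order.by = index(candidates))
  totalErr <- (lambda * smoothErr + (1 - lambda) * respErr)^conviction
  invErr   <- 1 / totalErr
  wts      <- invErr / rowSums(invErr, na.rm = TRUE)    # inverse-error weights
  xts(rowSums(wts * candidates, na.rm = TRUE), order.by = index(candidates))
}

spy  <- Ad(getSymbols("SPY", from = "2005-01-01", auto.assign = FALSE))
smas <- xts(sapply(c(10, 20, 50, 100, 200), function(k) SMA(spy, k)),
            order.by = index(spy))
filt <- ensembleFilter(spy, smas, conviction = 2, lambda = 0.75)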
Here's the code. This gets SPY data, and creates two utility functions: xtsApply, which is simply a column-based apply that restores the original index that a column-wise apply discards, and sumIsNa, which I use later for counting the number of NAs in a given row. It also creates my candidate filters, which, to keep things simple, are just SMAs 2-250. Here's the actual code of the function, with comments in the code itself to better explain the process from a technical level (for those still unfamiliar with R, look for the hashtags).

The vast majority of the computational time takes place in the two xtsApply calls. On 249 different simple moving averages, the process takes about 30 seconds. Here's the output, using a conviction factor of 2. And here is an example, looking at SPY from 2007 through 2011. In this case, I chose to go from blue to green, orange, brown, maroon, purple, and finally red for smoothness emphasis of 0, 5%, 25%, 50%, 75%, 95%, and 100%, respectively. Notice that the blue line is very wiggly, while the red line sometimes barely moves, such as during the 2011 drop-off.

One thing that I noticed in the course of putting this process together is something that eluded me earlier, namely, that naive trend-following strategies which are either fully long or fully short based on a crossover signal can lose money quickly in sideways markets. However, theoretically, by finely varying the jumps between 0 and 100 percent emphasis on smoothness, whether in steps of 1% or finer, one can have a sort of "continuous" conviction, by simply adding up the signs of differences between various ensemble filters. In an "uptrend", the difference as one moves from the most responsive to the most smooth filter should constantly be positive, and vice versa.

In the interest of brevity, this post doesn't even have a trading strategy attached to it. However, an implied trading strategy would be to be long or short the SPY depending on the sum of signs of the differences in filters as you move from responsiveness to smoothness. Of course, as the candidate filters are all SMAs, it probably wouldn't be particularly spectacular. However, for those out there who use more complex filters, this may be a way to create ensembles out of various candidate filters, and create even better filters. Furthermore, I hope that, given enough candidate filters and an objective way of selecting them, it would be possible to reduce the chances of creating an overfit trading system. However, anything with parameters can potentially be overfit, so that may be wishful thinking.

All in all, this is still a new idea for me. For instance, the filter used to compute the error terms can probably be improved. The inspiration for an EMA 20 essentially came from how Basel computes volatility (if I recall correctly, it uses the square root of an 18-day EMA of squared returns), and the very fact that I use an EMA can itself be improved upon (why an EMA instead of some other, more complex filter?). In fact, I'm always open to ways I can improve this concept (and others) from readers. Thanks for reading.

NOTE: I am currently contracting in Chicago in an analytics capacity. If anyone would like to meet up, let me know. You can email me at ilya.kipnis@gmail.com, or contact me through my LinkedIn here.
This post will deal with a quick, finger-in-the-air way of seeing how well a strategy scales, namely, how sensitive it is to latency between signal and execution, using a simple volatility trading strategy as an example. The signal will be the VIX/VXV ratio trading VXX and XIV, an idea I got from Volatility Made Simple's amazing blog, particularly this post. The three signals compared will be the "magical thinking" signal (observe the close, buy the close, named after the ruleOrderProc setting in quantstrat), buy on next-day open, and buy on next-day close.

Let's begin. So here's the run-through. In addition to the magical thinking strategy (observe the close, buy that same close), I tested three other variants: a variant which transacts at the next open, a variant which transacts at the next close, and the average of those two. Effectively, I feel these three could give a sense of a strategy's performance under more realistic conditions, that is, how well the strategy performs if transacted throughout the day, assuming you're managing a sum of money too large to just plow into the market in the closing minutes (and if you hope to get rich off of trading, you will have a larger sum of money than the amount you can apply magical thinking to). Ideally, I'd use VWAP pricing, but as that's not available for free anywhere I know of, readers can't replicate it even if I had such data.

In any case, here are the results. Log scale (for Mr. Tony Cooper and others). My reaction: the execute-on-next-day's-close performance being vastly lower than the other configurations (and that deterioration occurring in the most recent years) essentially means that the fills will have to come pretty quickly at the beginning of the day. While the strategy seems somewhat scalable through the lens of this finger-in-the-air technique, in my opinion, if the first full day of possible execution after signal reception will tank a strategy from a 1.44 Calmar to a 0.92, that's a massive drop-off, after holding everything else constant. In my opinion, this is quite a valid question to ask anyone who simply sells signals, as opposed to managing assets: namely, how sensitive are the signals to execution on the next day? After all, unless those signals come at 3:55 PM, one is most likely going to be getting filled the next day.

Now, while this strategy is a bit of a tomato can in terms of how good volatility trading strategies can get (they can get a lot better in my opinion), I think it made for a simple little demonstration of this technique.
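Here is a minimal sketch of the execution-lag comparison, assuming three xts objects already exist in memory: xivOp and xivCl (XIV open and close) and ratio (VIX close divided by VXV close). The long-when-the-ratio-is-below-1 rule is a simplification of the referenced signal, and the open-to-open approximation of next-open fills is an assumption.

library(quantmod)
library(PerformanceAnalytics)

sig <- (ratio < 1) * 1                       # long XIV when the ratio is below 1, else flat

retCC <- ROC(xivCl, type = "discrete")       # close-to-close returns
retOO <- ROC(xivOp, type = "discrete")       # open-to-open returns

magical   <- lag(sig, 1) * retCC             # observe the close, transact that same close
nextOpen  <- lag(sig, 2) * retOO             # transact at the following open, hold open to open
nextClose <- lag(sig, 2) * retCC             # transact at the following close

compare <- na.omit(cbind(magical, nextOpen, nextClose))
colnames(compare) <- c("magicalThinking", "nextOpen", "nextClose")
charts.PerformanceSummary(compare)
table.AnnualizedReturns(compare)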
Again, a huge thank you to Mr. Helmuth Vollmeier for so kindly keeping up his Dropbox of the volatility data all this time. Thanks for reading.

NOTE: I am currently contracting in a data science capacity in Chicago. You can email me at ilya.kipnis@gmail.com, or find me on my LinkedIn here. I'm always open to beers after work if you're in the Chicago area.

NOTE 2: Today, on October 21, 2015, if you're in Chicago, there's a Chicago R Users Group conference at Jaks Tap at 6:00 PM. Free pizza, networking, and R, hosted by Paul Teetor, who's a finance guy. Hope to see you there.

This post deals with an impossible-to-implement statistical arbitrage strategy using VXX and XIV. The strategy is simple: if the average daily return of VXX and XIV was positive, short both of them at the close. This strategy makes two assumptions of varying dubiousness: that one can "observe the close and act on the close", and that one can short VXX and XIV.

So, recently, I decided to play around with everyone's two favorite instruments on this blog, VXX and XIV, with the idea that "hey, these two instruments are diametrically opposed, so shouldn't there be a stat-arb trade here?" So, in order to do a lick-finger-in-the-air visualization, I implemented Mike Harris's momersion indicator, and then I ran the spread through it. In other words, this spread is certainly mean-reverting at just about all times.

And here is the code for the results from 2011 onward, from when XIV and VXX actually started trading. Here are the equity curves, with the following statistics: in other words, the short side is absolutely amazing as a trade, except for the one small fact of it being impossible to actually execute, or at least as far as I'm aware. Anyhow, this was simply a for-fun post, but hopefully it served some purpose.
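A minimal sketch of the "short both when the average return was positive" rule described above. It assumes vxxCl and xivCl are long-history xts close series for VXX and XIV (e.g. from the volatility-data files mentioned elsewhere on this blog), and, as the post notes, it uses the non-implementable observe-the-close, act-on-the-close convention.

library(quantmod)
library(PerformanceAnalytics)

vxxRet <- ROC(vxxCl, type = "discrete")
xivRet <- ROC(xivCl, type = "discrete")
avgRet <- (vxxRet + xivRet) / 2

# If the average daily return was positive, short both at that close for one day.
sig       <- (avgRet > 0) * -1
shortBoth <- lag(sig, 1) * avgRet        # equal-weight short of both legs

charts.PerformanceSummary(na.omit(shortBoth))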
Thanks for reading.

NOTE: I am currently contracting and am looking to network in the Chicago area. You can find my LinkedIn here.

This post will introduce John Ehlers's Autocorrelation Periodogram mechanism, a mechanism designed to dynamically find a lookback period. That is, the most common parameter optimized in backtests is the lookback period.

Before beginning this post, I must give credit where it's due, to one Mr. Fabrizio Maccallini, the head of structured derivatives at Nordea Markets in London. You can find the rest of the repository he did for Dr. John Ehlers's Cycle Analytics for Traders on his github. I am grateful and honored that such intelligent and experienced individuals are helping to bring some of Dr. Ehlers's methods into R.

The point of the Ehlers Autocorrelation Periodogram is to dynamically set a period between a minimum and a maximum period length. While I leave the exact explanation of the mechanic to Dr. Ehlers's book, for all practical intents and purposes, in my opinion, the punchline of this method is to attempt to remove a massive source of overfitting from trading system creation, namely specifying a lookback period. SMA of 50 days? 100 days? 200 days? Well, this algorithm takes that possibility of overfitting out of your hands. Simply specify an upper and lower bound for your lookback, and it does the rest. How well it does it is a topic of discussion for those well-versed in the methodologies of electrical engineering (I'm not), so feel free to leave comments that discuss how well the algorithm does its job, and feel free to blog about it as well.

In any case, here's the original algorithm code, courtesy of Mr. Maccallini. One thing I do notice is that this code uses a loop that says for(i in 1:length(filt)), which is an O(data points) loop, which I view as the plague in R. While I've used Rcpp before, it's been for only the most basic of loops, so this is definitely a place where the algorithm can stand to be improved with Rcpp due to R's inherently poor looping. Those interested in the exact logic of the algorithm will, once again, find it in John Ehlers's Cycle Analytics For Traders book (see the link earlier in the post).

Of course, the first thing to do is to test how well the algorithm does what it purports to do, which is to dictate the lookback period of an algorithm. Let's run it on some data. Now, what does the algorithm-set lookback period look like? Let's zoom in on 2001 through 2003, when the markets went through some upheaval. In this zoomed-in image, we can see that the algorithm's estimates seem fairly jumpy. Here's some code to feed the algorithm's estimates of n into an indicator, to compute an indicator with a dynamic lookback period as set by Ehlers's autocorrelation periodogram. And here is the function applied with an SMA, to tune between 120 and 252 days. As seen, this algorithm is less consistent than I would like, at least when it comes to using a simple moving average. For now, I'm going to leave this code here, and let people experiment with it. I hope that someone will find that this indicator is helpful to them. Thanks for reading.

NOTES: I am always interested in networking and meet-ups in the northeast (Philadelphia/NYC). Furthermore, if you believe your firm will benefit from my skills, please do not hesitate to reach out to me. My LinkedIn profile can be found here. Lastly, I am volunteering to curate the R section for books on quantocracy. If you have a book about R that can apply to finance, be sure to let me know about it, so that I can review it and possibly recommend it. Thank you.

This post will be an in-depth review of Alpha Architect's Quantitative Momentum book. Overall, in my opinion, the book is terrific for those that are practitioners in fund management in the individual equity space, and it still contains ideas worth thinking about outside of that space. However, the system detailed in the book benefits from nested ranking (rank along axis X, take the top decile, rank along axis Y within the top decile in X, and take the top decile along axis Y, essentially restricting selection to 1% of the universe). Furthermore, the book does not do much to touch upon volatility controls, which may have greatly enhanced the system outlined.

Before I get into the brunt of this post, I'd like to let my readers know that I formalized my nuts-and-bolts-of-quantstrat series of posts as a formal DataCamp course. DataCamp is a very cheap way to learn a bunch of R, and financial applications are among those topics. My course covers the basics of quantstrat, and if those who complete the course like it, I may very well create more advanced quantstrat modules on DataCamp. I'm hoping that the finance courses are well-received, since there are financial topics in R I'd like to learn myself that a 45-minute lecture doesn't really suffice for (such as Dr. David Matteson's change points magic, PortfolioAnalytics, and so on). In any case, here's the link.

So, let's start with a summary of the book. Part 1 is several chapters that are the giant exposé of why momentum works (or at least, has worked for at least 20 years since 1993), namely that human biases and irrational behaviors act in certain ways to make the anomaly work. Then there's also the career risk (AKA it's a risk factor, and so, if your benchmark is SPY and you run across a 3-year period of underperformance, you have severe career risk), and, essentially, a whole litany of reasons why a professional asset manager would get fired, but if you just stick with the anomaly over many, many years and ride out multi-year stretches of relative underperformance, you'll come out ahead in the very long run. Generally, I feel like there's work to be done if this is the best that can be done, but okay, I'll accept it. Essentially, part 1 is for the uninitiated. For those that have been around the momentum block a couple of times, they can skip right past this.
Unfortunately, it's half the book, so that leaves a little bit of a sour taste in the mouth. Next, part two is, in my opinion, where the real meat and potatoes of the book are: the "how".

Essentially, the algorithm can be boiled down into the following. Taking the universe of large- and mid-cap stocks, do the following:

1) Sort the stocks into deciles by 2-12 momentum, that is, at the end of every month, calculate momentum by last month's closing price minus the closing price 12 months ago. Essentially, research states that there's a reversion effect on the 1-month momentum. However, this effect doesn't carry over into the ETF universe in my experience.

2) Here's the interesting part which makes the book worth picking up on its own (in my opinion): after sorting into deciles, rank the top decile by the following metric: multiply the sign of the 2-12 momentum by the quantity (% negative returns - % positive). Essentially, the idea here is to determine the smoothness of the momentum. That is, in the most extreme situation, imagine a stock that did absolutely nothing for 230 days and then had one massive day that gave it its entire price appreciation (think Google when it had a 10% jump off of better-than-expected earnings reports), and in the other extreme, a stock that simply had each and every single day be a small positive price appreciation. Obviously, you'd want the second type of stock. That's this idea. Again, sort into deciles, and take the top decile. Therefore, taking the top decile of the top decile leaves you with 1% of the universe. Essentially, this makes the idea very difficult to replicate, since you'd need to track down a massive universe of stocks. That stated, I think the expression is actually a pretty good idea as a stand-in for volatility. That is, regardless of how volatile an asset is, whether it's as volatile as a commodity like DBC, or as non-volatile as a fixed-income product like SHY, this expression is an interesting way of stating "this path is choppy" vs. "this path is smooth". I might investigate this expression further on my blog in the future.

3) Lastly, if the portfolio is turning over quarterly instead of monthly, the best months to turn it over are the months preceding the end-of-quarter month (that is, February, May, August, November), because a bunch of amateur asset managers like to "window dress" their portfolios. That is, they had a crummy quarter, so in the last month before they have to send out quarterly statements, they load up on some recent winners so that their clients don't think they're as amateur as they really let on, and there's a bump for this. Similarly, January has some selling anomalies due to tax-loss harvesting. As far as practical implementations go, I think this is a very nice touch. Conceding the fact that turning over every month may be a bit too expensive, I like that Wes and Jack say "sure, you want to turn it over once every three months, but in which months?". It's a very good question to ask if it means you get an additional percentage point or 150 bps a year from that, as it just might cover the transaction costs and then some.

All in all, it's a fairly simple strategy to understand. However, the part that sort of gates off the book from a perfect replication is the difficulty in obtaining the CRSP data. However, I do commend Alpha Architect for disclosing the entire algorithm from start to finish.
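A sketch of the nested ranking described above, not the book's code. It assumes two hypothetical xts objects, monthlyPrices (month-end prices) and dailyReturns (daily returns), covering the same universe with matching column names, and it uses the return form of the 2-12 measure; the interpretation that the smoothest paths have the most negative sign-adjusted score is also an assumption.

library(xts)

mom212 <- lag(monthlyPrices, 1) / lag(monthlyPrices, 12) - 1   # 2-12 momentum, skipping the last month
momNow <- setNames(as.numeric(last(mom212)), colnames(mom212))

# Smoothness metric: sign of momentum times (share of negative days minus share
# of positive days) over the trailing year; more negative = smoother path.
lastYear     <- tail(dailyReturns, 252)
infoDiscrete <- apply(lastYear, 2, function(x) mean(x < 0, na.rm = TRUE) - mean(x > 0, na.rm = TRUE))
smoothScore  <- sign(momNow) * infoDiscrete[names(momNow)]

topDecile  <- names(which(momNow >= quantile(momNow, 0.9, na.rm = TRUE)))
finalPicks <- topDecile[which(smoothScore[topDecile] <=
                              quantile(smoothScore[topDecile], 0.1, na.rm = TRUE))]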
Furthermore, if the basic 2-12 momentum is not enough, there's an appendix detailing other types of momentum ideas (earnings momentum, ranking by distance to 52-week highs, absolute historical momentum, and so on). None of these strategies are really that much better than the basic price momentum strategy, so they're there for those interested, but it seems there's nothing really ground-breaking there. That is, if you're trading once a month, there are only so many ways of saying "hey, I think this thing is going up."

I also like that Wes and Jack touched on the fact that trend-following, while it doesn't improve overall CAGR or Sharpe, does a massive amount to improve max drawdown. That is, if faced with the prospect of losing 70-80% of everything versus losing only 30%, that's an easy choice to make. Trend-following is good, even a simplistic version.

All in all, I think the book accomplishes what it sets out to do, which is to present a well-researched algorithm. Ultimately, the punchline is on Alpha Architect's site (I believe they have some sort of monthly stock filter). Furthermore, the book states that there are better risk-adjusted returns when combined with the algorithm outlined in the "Quantitative Value" book. In my experience, I've never had value algorithms impress me in the backtests I've done, but I can chalk that up to me being inexperienced with all the various valuation metrics.

My criticism of the book, however, is this: the momentum algorithm in the book misses what I feel is one key component: volatility targeting control. Simply, the paper "Momentum Has Its Moments" (which I covered in my hypothesis-driven development series of posts) essentially states that the usual Fama-French momentum strategy does far better from a risk-reward standpoint by deleveraging during times of excessive volatility, thereby avoiding momentum crashes. I'm not sure why Wes and Jack didn't touch upon this paper, since the implementation is very simple (leverage factor = target volatility / realized volatility). Ideally, I'd love it if Wes or Jack could send me the stream of returns for this strategy (preferably daily, but monthly also works).
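A sketch of that volatility-targeting overlay, applied to a hypothetical monthly strategy return stream stratRets (an xts). The 10% annualized target, the 6-month realized-volatility window, and the 2x leverage cap are all assumptions for illustration.

library(quantmod)
library(PerformanceAnalytics)

targetVol   <- 0.10
realizedVol <- runSD(stratRets, n = 6) * sqrt(12)        # trailing annualized volatility
leverage    <- pmin(lag(targetVol / realizedVol, 1), 2)  # known at the start of each month, capped at 2x
scaled      <- leverage * stratRets

comparison <- na.omit(cbind(stratRets, scaled))
colnames(comparison) <- c("unscaled", "volTargeted")
charts.PerformanceSummary(comparison)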
While the depmix package performs admirably when it comes to describing the states of the past, when used for one-step-ahead prediction, under the assumption that tomorrow's state will be identical to today's, the hidden Markov model process found within the package does not perform to expectations.

So, to start off, this post was motivated by Michael Halls-Moore, who recently posted some R code about using the depmixS4 library for hidden Markov models. Generally, I am loath to create posts on topics I don't feel I have an absolutely front-to-back understanding of, but I'm doing this in the hope of learning from others on how to appropriately do online state-space prediction, or "regime switching" detection, as it may be called in more financial parlance.

While I've seen the usual theory of hidden Markov models (that is, it can rain or it can be sunny, but you can only infer the weather judging by the clothes you see people wearing outside your window when you wake up), and have worked with toy examples in MOOCs (Udacity's self-driving car course deals with them, if I recall correctly - or maybe it was the AI course), at the end of the day, theory is only as good as how well an implementation can work on real data.

For this experiment, I decided to take SPY data since inception, and do a full in-sample "backtest" on the data. That is, given that the HMM algorithm from depmix sees the whole history of returns, with this "god's eye" view of the data, does the algorithm correctly classify the regimes, if the backtest results are any indication? Here's the code to do so, inspired by Dr. Halls-Moore's. Essentially, while I did select three states, I noted that anything with an intercept above zero is a bull state, and below zero is a bear state, so essentially, it reduces to two states. With the result: so, not particularly terrible. The algorithm works, kind of, sort of, right?

Well, let's try online prediction now. What I did here was take an expanding window, starting from 500 days since SPY's inception, and keep increasing it, one day at a time. My prediction was, trivially enough, the most recent day's state, using a 1 for a bull state and a -1 for a bear state. I ran this process in parallel (on a Linux cluster, because Windows's doParallel library seems to not even know that certain packages are loaded, and it's messier), and the first big issue is that this process took about three hours on seven cores for about 23 years of data. Not exactly encouraging, but computing time isn't expensive these days.

So let's see if this process actually works. First, let's test whether the algorithm does what it's actually supposed to do, and use one day of look-ahead bias (that is, the algorithm tells us the state at the end of the day - how correct is it even for that day?). With the result: so, allegedly, the algorithm seems to do what it was designed to do, which is to classify a state for a given data set. Now, the most pertinent question: how well do these predictions do even one day ahead? You'd think that state-space predictions would be parsimonious from day to day, given the long history, correct? With the result: that is, without the lookahead bias, the state-space prediction algorithm is atrocious. Why is that? Well, here's the plot of the states: in short, the online HMM algorithm in the depmix package seems to change its mind very easily, with obvious (negative) implications for actual trading strategies.
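For those who want to reproduce the flavor of the in-sample fit, a minimal sketch (assuming SPY daily returns from Yahoo via quantmod; this is only a sketch of the approach described above, not the exact code behind the results discussed) might look like this:

```r
library(depmixS4)
library(quantmod)

# Daily log returns on SPY since inception
getSymbols("SPY", from = "1993-01-01", src = "yahoo")
spyRets <- na.omit(diff(log(Cl(SPY))))

# Fit a three-state Gaussian HMM on the full history (the "god's eye" view)
set.seed(123)
hmm    <- depmix(returns ~ 1, family = gaussian(), nstates = 3,
                 data = data.frame(returns = as.numeric(spyRets)))
hmmFit <- fit(hmm, verbose = FALSE)

# Most likely state per day, plus state posterior probabilities
states <- posterior(hmmFit)
head(states)
summary(hmmFit)  # intercepts above/below zero suggest bull/bear states
```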
So, that wraps it up for this post. Essentially, the main message here is this: there's a vast difference between doing descriptive analysis (AKA "where have you been, why did things happen?") and predictive analysis (that is, "if I correctly predict the future, I get a positive payoff"). In my opinion, while descriptive statistics have their purpose in terms of explaining why a strategy may have performed how it did, ultimately, we're always looking for better prediction tools. In this case, depmix, at least in this "out-of-the-box" demonstration, does not seem to be the tool for that. If anyone has had success with using depmix (or another regime-switching algorithm in R) for prediction, I would love to see work that details the procedure taken, as it's an area I'm looking to expand my toolbox into, but I don't have any particularly good leads. Essentially, I'd like to think of this post as me describing my own experiences with the package. Thanks for reading.

NOTE: On Oct. 5th, I will be in New York City. On Oct. 6th, I will be presenting at The Trading Show on the Programming Wars panel.

NOTE: My current analytics contract is up for review at the end of the year, so I am officially looking for other offers as well. If you have a full-time role which may benefit from the skills you see on my blog, please get in touch with me. My LinkedIn profile can be found here.

This post will introduce the component conditional value-at-risk mechanics found in PerformanceAnalytics, from a paper written by Brian Peterson, Kris Boudt, and Peter Carl. This is an easy-to-call mechanism for computing component expected shortfall in asset returns as they apply to a portfolio. While the exact mechanics are fairly complex, the upside is that the running time is nearly instantaneous, and this method is a solid tool for including in asset allocation analysis. For those interested in an in-depth analysis of the intuition of component conditional value-at-risk, I refer them to the paper written by Brian Peterson, Peter Carl, and Kris Boudt.

Essentially, here's the idea: all assets in a given portfolio have a marginal contribution to its total conditional value-at-risk (also known as expected shortfall) - that is, the expected loss when the loss surpasses a certain threshold. For instance, if you want to know your 5% expected shortfall, then it's the average of the worst 5 returns per 100 days, and so on. While, for returns at a daily resolution, the idea of expected shortfall may sound as though there will never be enough data in a sufficiently short time frame (one year or less), the expected shortfall formula in PerformanceAnalytics defaults to an approximation using a Cornish-Fisher expansion, which delivers very good results so long as the p-value isn't too extreme (that is, it works for relatively sane p-values such as the 1-10% range).

Component conditional value-at-risk has two uses: first off, given no input weights, it uses an equal-weight default, which allows it to provide a risk estimate for each individual asset without burdening the researcher to create his or her own correlation/covariance heuristics. Secondly, when provided with a set of weights, the output changes to reflect the contribution of various assets in proportion to those weights. This means that this methodology works very nicely with strategies that exclude assets based on momentum, but need a weighting scheme for the remaining assets.
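As a minimal sketch of the call itself (using the edhec data set that ships with PerformanceAnalytics; the weights below are illustrative assumptions on my part, not a recommendation):

```r
library(PerformanceAnalytics)
data(edhec)  # monthly hedge fund index returns bundled with the package

# Equal-weight component expected shortfall (modified Cornish-Fisher estimator)
equalWeightES <- ES(edhec, p = 0.95, method = "modified",
                    portfolio_method = "component")
equalWeightES$pct_contrib_MES  # percent contribution of each manager

# Custom weights: equal weight in the first ten funds, zero in the last three
wts <- c(rep(1/10, 10), rep(0, 3))
customWeightES <- ES(edhec, p = 0.95, method = "modified",
                     portfolio_method = "component", weights = wts)
customWeightES$pct_contrib_MES
```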
Furthermore, using this methodology also allows an ex-post analysis of risk contribution, to see which instrument contributed what to risk.

First, a demonstration of how the mechanism works, using the edhec data set. There is no strategy here, just a demonstration of syntax. This will assume an equal-weight contribution from all of the funds in the edhec data set. So tmp is the contribution to expected shortfall from each of the various edhec managers over the entire time period. Here's the output: the salient part of this is the percent contribution (the last output). Notice that it can be negative, meaning that certain funds gain when others lose. At least, this was the case over the current data set. These assets diversify a portfolio and actually lower expected shortfall. In this case, I equally weighted the first ten managers in the edhec data set, and put zero weight in the last three. Furthermore, we can see what happens when the weights are not equal. This time, notice that as the weight increased in the convertible arb manager, so too did his contribution to maximum expected shortfall.

For a future backtest, I would like to make some data requests. I would like to use the universe found in Faber's Global Asset Allocation book. That said, the simulations in that book go back to 1972, and I was wondering if anyone out there has daily returns for those assets/indices. While some ETFs go back into the early 2000s, there are some that start rather late, such as DBC (commodities, early 2006), GLD (gold, early 2004), BWX (foreign bonds, late 2007), and FTY (NAREIT, early 2007). As an eight-year backtest would be a bit short, I was wondering if anyone had data with more history.

One other thing: I will be in New York for the trading show, speaking on the "programming wars" panel on October 6th. Thanks for reading.

NOTE: While I am currently contracting, I am also looking for a permanent position which can benefit from my skills for when my current contract ends. If you have or are aware of such an opening, I will be happy to speak with you.

This post will cover a function to simplify creating Harry Long type rebalancing strategies from SeekingAlpha for interested readers. As Harry Long has stated, most, if not all, of his strategies are more for demonstrative purposes than actual recommended investments.

So, since Harry Long has been posting some more articles on Seeking Alpha, I've had a reader or two ask me to analyze his strategies (again). Instead of doing that, however, I'll simply put this tool here, which is a wrapper that automates the acquisition of data and simulates portfolio rebalancing with one line of code. Here's the tool. It fetches the data for you (usually from Yahoo, but a big thank you to Mr. Helmuth Vollmeier in the case of ZIV and VXX), and has the option of either simply displaying an equity curve and some statistics (CAGR, annualized standard deviation, Sharpe, max drawdown, Calmar), or giving you the return stream as an output if you wish to do more analysis in R.

Here's an example of simply getting the statistics, with an 80% XLP/SPLV (they're more or less interchangeable) and 20% TMF (aka 60% TLT, so an 80/60 portfolio), from one of Harry Long's articles. Nothing out of the ordinary of what we might expect from a balanced equity/bonds portfolio. Generally does well, has its largest drawdown in the financial crisis, and some other bumps in the road, but overall, I'd think a fairly vanilla "set it and forget it" sort of thing.
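For readers without the wrapper, a rough equivalent of that 80/20 illustration can be pieced together with stock quantmod and PerformanceAnalytics calls (a sketch assuming Yahoo data and yearly rebalancing; this is not the author's actual tool):

```r
library(quantmod)
library(PerformanceAnalytics)

getSymbols(c("XLP", "TMF"), from = "2009-01-01", src = "yahoo")
rets <- na.omit(cbind(ROC(Ad(XLP), type = "discrete"),
                      ROC(Ad(TMF), type = "discrete")))
colnames(rets) <- c("XLP", "TMF")

# 80/20 XLP/TMF, rebalanced annually
port <- Return.portfolio(rets, weights = c(0.8, 0.2), rebalance_on = "years")
table.AnnualizedReturns(port)
maxDrawdown(port)
charts.PerformanceSummary(port)
```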
And here would be the way to get the stream of individual daily returns, assuming you wanted to rebalance these two instruments weekly instead of yearly (as is the default). And now let's get some statistics. It turns out, moving the rebalancing from annually to weekly didn't have much of an effect here (besides giving a bunch of money to your broker, if you factored in transaction costs, which this doesn't).

So, that's how this tool works. The results, of course, begin from the latest instrument's inception. The trick, in my opinion, is to try and find proxy substitutes with longer histories for newer ETFs that are simply leveraged ETFs, such as using a 60% weight in TLT with an 80% weight in XLP instead of a 20% weight in TMF with an 80% allocation in SPLV. For instance, here are some proxies: SPLV - XLP; SPXL/UPRO - SPY * 3; TMF - TLT * 3.

That said, I've worked with Harry Long before, and he develops more sophisticated strategies behind the scenes, so I'd recommend that SeekingAlpha readers take his publicly released strategies as concept demonstrations, as opposed to fully-fledged investment ideas, and contact Mr. Long himself about more customized, private solutions for investment institutions if you are so interested. Thanks for reading.

NOTE: I am currently in the northeast. While I am currently contracting, I am interested in networking with individuals or firms with regard to potential collaboration opportunities.

This post will demonstrate how to take into account turnover when dealing with returns-based data, using PerformanceAnalytics and the Return.Portfolio function in R. It will demonstrate this on a basic strategy on the nine sector SPDRs.

So, first off, this is in response to a question posed by one Robert Wages on the R-SIG-Finance mailing list. While there are many individuals out there with a plethora of questions (many of which can be found to be demonstrated on this blog already), occasionally there will be an industry veteran, a PhD statistics student from Stanford, or another very intelligent individual who will ask a question on a topic that I haven't yet touched on this blog, which will prompt a post to demonstrate another technical aspect found in R. This is one of those times.

So, this demonstration will be about computing turnover in returns space using the PerformanceAnalytics package. Simply, outside of the PortfolioAnalytics package, PerformanceAnalytics, with its Return.Portfolio function, is the go-to R package for portfolio management simulations, as it can take a set of weights and a set of returns, and generate a set of portfolio returns for analysis with the rest of PerformanceAnalytics's functions.

Again, the strategy is this: take the 9 three-letter sector SPDRs (since there are four-letter ETFs now), and at the end of every month, if the adjusted price is above its 200-day moving average, invest in it. Normalize across all invested sectors (that is, 1/9th each if invested in all 9, 100% into 1 if only 1 is invested in, and 100% cash, denoted with a zero-return vector, if no sectors are invested in). It's a simple toy strategy, as the strategy isn't the point of the demonstration.

Here's the basic setup code: get the SPDRs, put them together, compute their returns, generate the signal, and create the zero vector, since Return.Portfolio treats weights less than 1 as a withdrawal, and weights above 1 as the addition of more capital (big FYI here). Now, here's how to compute turnover. The trick is this: when you call Return.
portfolio, use the verbose = TRUE option. This creates several objects, among them returns, BOP.Weight, and EOP.Weight. These stand for Beginning Of Period Weight and End Of Period Weight.

The way that turnover is computed is simply the difference between how the day's return moves the allocated portfolio from its previous ending point to where that portfolio actually stands at the beginning of the next period. That is, the end-of-period weight is the beginning-of-period weight drifted by the day's return for that asset. The new beginning-of-period weight is the end-of-period weight plus any transacting that would have been done. Thus, in order to find the actual transactions (or turnover), one subtracts the previous end-of-period weight from the beginning-of-period weight. This is what such transactions look like for this strategy.

Something we can do with such data is take a one-year rolling turnover, accomplished with the following code: it looks like this: this essentially means that one year's worth of two-way turnover (that is, if selling an entirely invested portfolio is 100% turnover, and buying an entirely new set of assets is another 100%, then two-way turnover is 200%) is around 800% at maximum. That may be pretty high for some people.

Now, here's the application when you penalize transaction costs at 20 basis points per percentage point traded (that is, it costs 20 cents to transact $100). So, at 20 basis points on transaction costs, that takes about one percent in returns per year out of this (admittedly, terrible) strategy. This is far from negligible.

So, that is how you actually compute turnover and transaction costs. In this case, the transaction cost model was very simple. However, given that Return.portfolio returns transactions at the individual asset level, one could get as complex as one would like with modeling the transaction costs. Thanks for reading.

NOTE: I will be giving a lightning talk at R/Finance, so for those attending, you'll be able to find me there.

This post will outline an easy-to-make mistake in writing vectorized backtests - namely, using a signal obtained at the end of a period to enter (or exit) a position in that same period. The difference in results one obtains is massive.

Today, I saw two separate posts from Alpha Architect and Mike Harris, both referencing a paper by Valeriy Zakamulin on the fact that some previous trend-following research by Glabadanidis was done with shoddy results, and that Glabadanidis's results were only reproducible through instituting lookahead bias.

The following code shows how to reproduce this lookahead bias. First, the setup of a basic moving average strategy on the S&P 500 index from as far back as Yahoo data will provide. And here is how to institute the lookahead bias. These are the "results": of course, this equity curve is of no use, so here's one in log scale. As can be seen, lookahead bias makes a massive difference. Here are the numerical results: again, absolutely ridiculous.

Note that when using Return.Portfolio (the function in PerformanceAnalytics), that package will automatically apply your weights to the next period's return, instead of the current one. However, for those writing "simple" backtests that can be quickly done using vectorized operations, an off-by-one error can make all the difference between a backtest in the realm of the reasonable and pure nonsense.
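A minimal sketch of the off-by-one issue on the S&P 500 (assuming Yahoo data via quantmod and a 200-day moving average; this illustrates the mechanics, not the exact code behind the results quoted above):

```r
library(quantmod)
library(PerformanceAnalytics)

getSymbols("^GSPC", from = "1950-01-01", src = "yahoo")
prices <- Cl(GSPC)
rets   <- ROC(prices, type = "discrete")
signal <- prices > SMA(prices, 200)

# Correct: the signal observed at today's close earns tomorrow's return
noLookahead <- lag(signal) * rets
# Lookahead bias: the same bar both generates the signal and earns the return
lookahead   <- signal * rets

compare <- na.omit(cbind(noLookahead, lookahead))
colnames(compare) <- c("no_lookahead", "lookahead")
table.AnnualizedReturns(compare)
charts.PerformanceSummary(compare)
```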
However, should one wish to test for said nonsense when faced with impossible-to-replicate results, the mechanics demonstrated above are the way to do it.

Now, onto other news: I'd like to thank Gerald M for staying on top of one of the Logical Invest strategies - namely, their simple global market rotation strategy outlined in an article from an earlier blog post. Up until March 2015 (the date of the blog post), the strategy had performed well. However, after said date, it has been a complete disaster, which, in hindsight, was evident when I passed it through the hypothesis-driven development framework process I wrote about earlier. So, while there has been a great deal written about not simply throwing away a strategy because of short-term underperformance, and about anomalies such as momentum and value existing because of career risk due to said short-term underperformance, it's never a good thing when a strategy creates historically large losses, particularly after being published in such a humble corner of the quantitative financial world. In any case, this was a post demonstrating some mechanics, and an update on a strategy I blogged about not too long ago. Thanks for reading.

NOTE: I am always interested in hearing about new opportunities which may benefit from my expertise, and am always happy to network. You can find my LinkedIn profile here.

This post will shed light on the values of R-squared behind two rather simplistic strategies - the simple 10-month SMA, and its relative, the 10-month momentum (which is simply a difference of SMAs, as Alpha Architect showed in their book DIY Financial Advisor).

Not too long ago, a friend of mine named Josh asked me a question regarding R-squared values in finance. He's finishing up his PhD in statistics at Stanford, so when people like that ask me questions, I'd like to answer them. His assertion is that in some instances, models that have less than perfect predictive power (e.g. an R-squared of .4, for instance) can still deliver very promising predictions, and that if someone were to have a financial model that was able to explain 40% of the variance of returns, they could happily retire with that model making them very wealthy. Indeed, 40% is a very optimistic outlook (to put it lightly), as this post will show.

In order to illustrate this example, I took two "staple" strategies: buy SPY when its closing monthly price is above its ten-month simple moving average, and buy when its ten-month momentum (basically the difference of a ten-month moving average and its lag) is positive. While these models are simplistic, they are ubiquitously talked about, and many momentum strategies are an improvement upon these baseline, "out-of-the-box" strategies.

Here's the code to do that, and here are the results: in short, the SMA10 and the 10-month momentum (aka ROC 10, aka MOM10) both handily outperform buy and hold, not only in absolute returns, but especially in risk-adjusted returns (Sharpe and Calmar ratios). Again, simplistic analysis, and many models get much more sophisticated than this, but once again, a simple, illustrative example using two strategies that outperform a benchmark (over the long term, anyway).
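A minimal sketch of those two baseline strategies (assuming SPY monthly data from Yahoo via quantmod; a sketch of the idea described above, not the post's original code):

```r
library(quantmod)
library(PerformanceAnalytics)

getSymbols("SPY", from = "1993-01-01", src = "yahoo")
monthlySPY  <- Ad(to.monthly(SPY, indexAt = "lastof", drop.time = TRUE))
monthlyRets <- ROC(monthlySPY, type = "discrete")

smaSig <- lag(monthlySPY > SMA(monthlySPY, 10))             # above its 10-month SMA
momSig <- lag(ROC(monthlySPY, 10, type = "discrete") > 0)   # positive 10-month momentum

strats <- na.omit(cbind(smaSig * monthlyRets, momSig * monthlyRets, monthlyRets))
colnames(strats) <- c("SMA10", "MOM10", "BuyHold")

table.AnnualizedReturns(strats)
CalmarRatio(strats)
maxDrawdown(strats)
```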
Now, the question is, what was the R-squared of these models? To answer this, I took a rolling five-year window that essentially asked: how well did these quantities (the ratio between the closing price and the moving average, minus 1, or the ten-month momentum) predict the next month's returns? That is, what proportion of the variance is explained through the monthly returns regressed against the previous month's signals in numerical form (perhaps not the best framing, as the signal is binary as opposed to continuous, which is what is being regressed, but let's set that aside, again, for the sake of illustration)?

Here's the code to generate the answer, and the answer, in pictorial form: in short, even in the best-case scenarios - namely, crises, which provide momentum/trend-following (call it what you will) its raison d'être, that is, its risk-management appeal - the proportion of variance explained by the actual signal quantities was very small. In the best of times, around 20%. But then again, think about what the R-squared value actually is: it's the percentage of variance explained by a predictor. If a small set of signals (let alone one) was able to explain the majority of the change in the returns of the S&P 500, or even a not-insignificant portion, such a person would stand to become very wealthy. More to the point, given that two strategies that handily outperform the market have R-squared values that are exceptionally low for extended periods of time, it goes to show that holding the R-squared up as some form of statistical holy grail certainly is incorrect in the general sense, and anyone who does so either is painting with too broad a brush, is creating disingenuous arguments, or should simply attempt to understand another field which may not work the way their intuition tells them. Thanks for reading.

This review will cover the book "Adaptive Asset Allocation: Dynamic Global Portfolios to Profit in Good Times - and Bad" by the people at ReSolve Asset Management. Overall, this book is a definite must-read for those who have never been exposed to the ideas within it. However, when it comes to a solution that can be fully replicated, this book is lacking.

Okay, it's been a while since I reviewed my last book, DIY Financial Advisor, from the awesome people at Alpha Architect. This book, in my opinion, is set up in a similar sort of format. This is the structure of the book, and my reviews along with it:

Part 1: Why in the heck you actually need to have a diversified portfolio, and why a diversified portfolio is a good thing. In a world in which there is so much emphasis put on single-security performance, this is certainly something that absolutely must be stated for those not familiar with portfolio theory. It highlights the example of two people - one from Abbott Labs, and one from Enron - who had so much of their savings concentrated in their company's stock. Mr. Abbott got hit hard and changed his outlook on how to save for retirement, and Mr. Enron was never heard from again. Long story short: a diversified portfolio is good, and a properly diversified portfolio can offset one asset's zigs with another asset's zags. This is the key to establishing a stream of returns that will help meet financial goals. Basically, this is your common-sense story (humans love being told stories), so as to motivate you to read the rest of the book.
It does its job, though for someone like me, it's more akin to a big "wait for it, wait for it... and there's the reason why we should read on, as expected."

Part 2: Something not often brought up in many corners of the quant world (because it's real-life boring stuff) is the importance not only of average returns, but of when those returns are achieved. Namely, imagine your everyday saver. At the beginning of their career, they're taking home less salary and have less money in their retirement portfolio (or speculation portfolio, but the book uses a retirement portfolio). As they get into middle age and closer to retirement, they have a lot more money in said retirement portfolio. Thus, strong returns are most vital when there is more cash available to the portfolio, and the difference between mediocre returns at the beginning and strong returns at the end of one's working life, as opposed to vice versa, is astronomical and cannot be overstated. Furthermore, once in retirement, strong returns in the early years matter far more than returns in the later years, once money has been withdrawn from the portfolio (though I'd hope that a portfolio's returns can be so strong that one can simply "live off the interest"). Or, put more intuitively: when you have $10,000 in your portfolio, a 20% drawdown doesn't exactly hurt, because you can make more money and put more into your retirement account. But when you're 62, have $500,000, and suddenly lose 30% of everything, well, that's massive. How much an investor wants to avoid such a scenario cannot be overstated. Warren Buffett once said that if you can't bear to lose 50% of everything, you shouldn't be in stocks.

I really like this part of the book because it shows just how dangerous the ideas of "a 50% drawdown is unavoidable" and other "stay invested for the long haul" refrains are. Essentially, this part of the book makes a resounding statement that any financial adviser keeping his or her clients invested in equities when they're near retirement age is doing something not very advisable, to put it lightly. In my opinion, those who advise pension funds should especially keep this section of the book in mind, since for some people, the long term may be coming to an end, and what matters is not only steady returns, but making sure the strategy doesn't fall off a cliff and destroy decades of hard-earned savings.

Part 3: This part is also a very important read. First off, it lays out in clear terms that the long-term forward-looking valuations for equities are at rock bottom. That is, the expected forward 15-year returns are very low, using approximately 75 years of evidence. Currently, according to the book, equity valuations imply a negative 15-year forward return. However, one thing I will take issue with is that while forward-looking long-term returns for equities may be very low, if one believed this chart and only invested in the stock market when forecast 15-year returns were above the long-term average, one would have missed out on both the 2003-2007 bull run and the one since 2009 that's just about over. So, while the book makes a strong case for caution, readers should also take the chart with a grain of salt, in my opinion.
However, another aspect of portfolio construction that this book covers is how to construct a robust (assets for any economic environment) and coherent (asset classes balanced in number) universe for implementation with any asset allocation algorithm. I think this bears repeating: universe selection is an extremely important topic in the discussion of asset allocation, yet there is very little discussion about it. Most research simply takes some "conventional universe", such as "all stocks on the NYSE", or "all the stocks in the S&P 500", or "the entire set of the 50-60 most liquid futures", without consideration for robustness and coherence. This book is the first source I've seen that actually puts this topic under a magnifying glass, besides "finger in the air pick and choose".

Part 4: And here's where I level my main criticism at this book. For those that have read "Adaptive Asset Allocation: A Primer", this section of the book is basically one giant copy and paste. It's all one large buildup to "momentum rank, then minimum-variance optimization". All well and good, until there's very little detail beyond the basics as to how the minimum variance portfolio was constructed. Namely, what exactly is the minimum variance algorithm in use? Is it one of the poor variants susceptible to the numerical instability inherent in inverting sample covariance matrices? Or is it a heuristic like David Varadi's minimum variance and minimum correlation algorithm? The one feeling I absolutely could not shake was that this book had a perfect opportunity to present a robust approach to minimum variance, and instead, it's long on concept, short on details. While the theory of "maximize return for unit risk" is all well and good, the actual algorithm to implement that theory in practice is not trivial, with the solutions taught to undergrads and master's students having some well-known weaknesses. On top of this, one thing that got hammered into my head in the past was that ranking also has a weakness at the inclusion/exclusion point. E.g. if, out of ten assets, the fifth asset had a momentum of, say, 10.9%, and the sixth asset had a momentum of 10.8%, how are we so sure the fifth is so much better? And while I realize that this book was ultimately meant to be a primer, in my opinion, it would have been a no-objections five stars if there were an appendix that actually went into some detail on how to go beyond the simple concepts, and included a small numerical example of some algorithms that may address the well-known weaknesses. This doesn't mean Greek/mathematical jargon. Just an appendix that acknowledged that not every reader is someone only picking up his first or second book about systematic investing, and that some of us are familiar with the "whys" and are more interested in the "hows". Furthermore, I'd really love to know where the authors of this book got their data to back-date some of these ETFs into the 90s.

Part 5: Some more formal research on topics already covered in the rest of the book - namely, a section about how many independent bets one can take as the number of assets grows, if I remember it correctly. Long story short: you easily get the most bang for your buck among disparate asset classes, such as treasuries of various duration, commodities, developed vs.
emerging equities, and so on, as opposed to trying to pick among stocks in the same asset class (though there's some potential for alpha there... just... a lot less than you imagine). So in case the idea of asset class selection, not stock selection, wasn't beaten into the reader's head before this point, this part should do the trick. The other research paper is something I briefly skimmed over, which went into more depth about volatility and retirement portfolios, though I felt that the book covered this topic earlier on to a sufficient degree by building up the intuition using very understandable scenarios.

So that's the review of the book. Overall, it's a very solid piece of writing, and as far as establishing the why, it does an absolutely superb job. For those that aren't familiar with the concepts in this book, this is definitely a must-read, and ASAP. However, for those familiar with most of the concepts and looking for a detailed "how" procedure, this book does not deliver as much as I would have liked. And while I realize that it's a bad idea to publish secret sauce, I bought this book in the hope of being exposed to a new algorithm presented in the understandable and intuitive language that the rest of the book was written in, and was left wanting. Still, that by no means diminishes the impact of the rest of the book. For those who are more likely to be its target audience, it's a 5/5. For those that wanted some specifics, it still has its gem on universe construction. Overall, I rate it a 4/5. Thanks for reading. Happy new year.

This post will be a quick one covering the relationship between the simple moving average and time series momentum. The implication is that one can potentially derive better time series momentum indicators than the classical one applied in so many papers.

Okay, so the main idea for this post is quite simple: I'm sure we're all familiar with classical momentum. That is, the price now compared to the price however long ago (3 months, 10 months, 12 months, etc.), e.g. P(now) - P(10). And I'm sure everyone is familiar with the simple moving average indicator as well, e.g. SMA(10). Well, as it turns out, these two quantities are actually related. It turns out that if, instead of expressing momentum as the difference of two numbers, it is expressed as the sum of returns, it can be written (for a 10-month momentum) as:

MOM_10 = return of this month + return of last month + return of 2 months ago + ... + return of 9 months ago, for a total of 10 months in our little example.

This can be written as MOM_10 = (P(0) - P(1)) + (P(1) - P(2)) + ... + (P(9) - P(10)). (Each difference within parentheses denotes one month's worth of returns.)

Which can then be rewritten by associative arithmetic as: (P(0) + P(1) + ... + P(9)) - (P(1) + P(2) + ... + P(10)).

In other words, momentum - aka the difference between two prices - can be rewritten as the difference between two cumulative sums of prices. And what is a simple moving average? Simply a cumulative sum of prices divided by however many prices were summed over. Here's some R code to demonstrate (see the sketch below), with the resulting number of times these two signals are equal: in short, every time.

Now, what exactly is the punchline of this little example? Here's the punchline: the simple moving average is... fairly simplistic as far as filters go. It works as a pedagogical example, but it has some well-known weaknesses regarding lag, windowing effects, and so on.
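Here is a minimal sketch of that equivalence (assuming SPY monthly closes via quantmod; an illustration of the algebra above, not the post's original code):

```r
library(quantmod)

getSymbols("SPY", from = "1993-01-01", src = "yahoo")
monthlySPY <- Ad(to.monthly(SPY, indexAt = "lastof", drop.time = TRUE))

mom10    <- monthlySPY - lag(monthlySPY, 10)   # classical 10-month momentum
smaSlope <- diff(SMA(monthlySPY, 10))          # one-month change in the 10-month SMA

# The two differ only by a constant factor of 10, so their signs always agree
signalsEqual <- sign(mom10) == sign(smaSlope)
sum(signalsEqual, na.rm = TRUE) / sum(!is.na(signalsEqual))  # should be 1, i.e. every time
```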
Here's a toy example of how one can get a different momentum signal by changing the filter, with the following results: while the difference-of-EMA10 strategy didn't do better than the difference-of-SMA10 (aka standard 10-month momentum), that's not the point. The point is that the momentum signal is derived from a simple moving average filter, and that by using a different filter, one can still use a momentum type of strategy. Or, put differently, the main/general takeaway here is that momentum is the slope of a filter, and one can compute momentum in an infinite number of ways depending on the filter used, and can come up with a myriad of different momentum strategies. Thanks for reading.

NOTE: I am currently contracting in Chicago, and am always open to networking. Contact me at my email at ilya.kipnis@gmail or find me on my LinkedIn here.

A Hammer Trading System - Demonstrating Custom Indicator-Based Limit Orders in Quantstrat

So, several weeks ago, I decided to listen to a webinar (and I myself will be giving one on using quantstrat on Sep. 3 for Big Mike's Trading; see link). Among some of those talks was a trading system called the Trend Turn Trade Take Profit system. This is his system: define an uptrend as an SMA10 above an SMA30. Define a pullback as an SMA5 below an SMA10. Define a hammer as a candle with an upper shadow less than 20% of the lower shadow, and a body less than 50% of the lower shadow. Enter on the high of the hammer, with the stop loss set at the low of the hammer, extended by an additional one third of the range. The take-profit target is 1.5 to 1.7 times the distance between the entry and the stop price. Additionally (not tested here) was the bullish engulfing pattern, which is a two-bar pattern with the conditions of a down day followed by an up day on which the open of the up day was less than the close of the down day, and the close of the up day was higher than the previous day's open, with the stop set to the low of the pattern, and the profit target in the same place.

This system was advertised to be correct about 70% of the time, with trades whose wins were 1.6 times as much as the losses, so I decided to investigate it. The upside to this post, in addition to investigating someone else's system, is that it will allow me to demonstrate how to create more nuanced orders with quantstrat. The best selling point for quantstrat, in my opinion, is that it provides a framework to do just about anything you want, provided you know how to do it (not trivial). In any case, the salient thing to take from this strategy is that it's possible to create some interesting custom orders with some nuanced syntax. Here's the syntax for this strategy: I added one additional rule to the strategy, in that if the trend reverses (SMA10 < SMA30), the strategy gets out of the trade.

First off, let's take a closer look at the entry and exit rules. The rules used here use a few new concepts that I haven't used in previous blog posts. First off, the orderset argument puts all the orders within one order set into a one-cancels-the-other mechanism. Next, the order.price syntax works similarly to the market data syntax for specifying indicators (e.g. add.indicator(strategy.st, name = "SMA", arguments = list(x = quote(Cl(mktdata))), etc.)), except this time it specifies a certain column in the market data (which is, in fact, what Cl(mktdata) does, or HLC(mktdata), and so on); in addition, the timestamp syntax is necessary so it knows what specific quantity in time is being referred to.
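To give a flavor of that syntax, here is a heavily hedged sketch of what such chained exit rules can look like in quantstrat. The signal label "longEntry", the entry rule label "enterLong", and the mktdata columns "takeProfit" and "stopLoss" are hypothetical placeholders assumed to have been added by earlier add.signal/add.indicator/add.rule calls on an existing strategy object strategy.st; this is a sketch of the order.price/orderset idea described above, not the webinar's or the post's exact code:

```r
library(quantstrat)
# Assumes strategy.st already exists and has an entry rule labeled "enterLong",
# plus custom mktdata columns "takeProfit" and "stopLoss" computed beforehand.

# Take-profit exit: a limit order priced off a custom column in mktdata
add.rule(strategy.st, name = "ruleSignal",
         arguments = list(sigcol = "longEntry", sigval = TRUE,
                          ordertype = "limit", orderside = "long",
                          replace = FALSE, orderqty = "all",
                          order.price = quote(mktdata$takeProfit[timestamp]),
                          orderset = "ocolong"),
         type = "chain", parent = "enterLong", label = "takeProfitLong")

# Stop-loss exit: a stoplimit order in the same orderset, so whichever of the
# two exit orders fills first cancels the other
add.rule(strategy.st, name = "ruleSignal",
         arguments = list(sigcol = "longEntry", sigval = TRUE,
                          ordertype = "stoplimit", orderside = "long",
                          replace = FALSE, orderqty = "all",
                          order.price = quote(mktdata$stopLoss[timestamp]),
                          orderset = "ocolong"),
         type = "chain", parent = "enterLong", label = "stopLossLong")
```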
For take-profit orders, as you want to sell above the market or buy below the market, the correct type of order (that is, the ordertype argument) is a limit order. With stop-losses or trailing stops (not shown here), since you want to sell below the market or buy above the market, the correct ordertype is a stoplimit order. Finally, the rule I added (the SMA exit) actually improves the strategy's performance (I wanted to give this system the benefit of the doubt).

Here are the results, with the strategy leveraged up to .1 pctATR (the usual strategies I test range between .02 and .04): in short, looking at the trade stats, this system is far from what was advertised. In fact, here's the equity curve: anything but spectacular the past several years, which is, I suppose, why it was given away for free in a webinar. Overall, the past several years have just seen the S&P continue to catch up to this strategy. At the end of the day, it's a highly unimpressive system in my opinion, and I won't be exploring the other aspects of it further. However, as an exercise in showing some nuanced features of quantstrat, I think this was a worthwhile endeavor. Thanks for reading.
